Log Type: stderr
Log Upload Time: Tue Apr 17 17:31:19 +0300 2018
Log Length: 7692277
18/04/17 16:32:20 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
18/04/17 16:32:20 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1520875508177_0403_000002
18/04/17 16:32:21 INFO spark.SecurityManager: Changing view acls to: yarn,jenkins
18/04/17 16:32:21 INFO spark.SecurityManager: Changing modify acls to: yarn,jenkins
18/04/17 16:32:21 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, jenkins); users with modify permissions: Set(yarn, jenkins)
18/04/17 16:32:21 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
18/04/17 16:32:21 INFO yarn.ApplicationMaster: Waiting for spark context initialization...
18/04/17 16:32:21 INFO spark.SparkContext: Running Spark version 1.6.0
18/04/17 16:32:21 INFO spark.SecurityManager: Changing view acls to: yarn,jenkins
18/04/17 16:32:21 INFO spark.SecurityManager: Changing modify acls to: yarn,jenkins
18/04/17 16:32:21 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, jenkins); users with modify permissions: Set(yarn, jenkins)
18/04/17 16:32:21 INFO util.Utils: Successfully started service 'sparkDriver' on port 51755.
18/04/17 16:32:22 INFO slf4j.Slf4jLogger: Slf4jLogger started
18/04/17 16:32:22 INFO Remoting: Starting remoting
18/04/17 16:32:22 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@***IP masked***:43892]
18/04/17 16:32:22 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriverActorSystem@***IP masked***:43892]
18/04/17 16:32:22 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 43892.
18/04/17 16:32:22 INFO spark.SparkEnv: Registering MapOutputTracker
18/04/17 16:32:22 INFO spark.SparkEnv: Registering BlockManagerMaster
18/04/17 16:32:22 INFO storage.DiskBlockManager: Created local directory at /hadoop/1/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/blockmgr-bb828e90-e4ea-4cfd-9bcb-21ab798f6b6a
18/04/17 16:32:22 INFO storage.DiskBlockManager: Created local directory at /hadoop/2/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/blockmgr-6fc375d0-6607-40ba-b627-4b7aa42574ce
18/04/17 16:32:22 INFO storage.DiskBlockManager: Created local directory at /hadoop/3/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/blockmgr-0c02fdec-21ea-436d-bc84-e56bdb36af4e
18/04/17 16:32:22 INFO storage.DiskBlockManager: Created local directory at /hadoop/4/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/blockmgr-a1eca2a2-567a-46c3-8fbb-0e49cf0c136b
18/04/17 16:32:22 INFO storage.DiskBlockManager: Created local directory at /hadoop/5/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/blockmgr-bc876e34-0039-4e6b-979b-dc3f5f24e975
18/04/17 16:32:22 INFO storage.DiskBlockManager: Created local directory at /hadoop/6/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/blockmgr-91c8547d-8a61-4741-89b5-9bb352ac0129
18/04/17 16:32:22 INFO storage.MemoryStore: MemoryStore started with capacity 491.7 MB
18/04/17 16:32:22 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/04/17 16:32:22 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
18/04/17 16:32:22 INFO util.Utils: Successfully started service 'SparkUI' on port 48756.
18/04/17 16:32:22 INFO ui.SparkUI: Started SparkUI at http://***IP masked***:48756
18/04/17 16:32:22 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
18/04/17 16:32:22 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45737.
18/04/17 16:32:22 INFO netty.NettyBlockTransferService: Server created on 45737
18/04/17 16:32:22 INFO storage.BlockManager: external shuffle service port = 7337
18/04/17 16:32:22 INFO storage.BlockManagerMaster: Trying to register BlockManager
18/04/17 16:32:22 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***IP masked***:45737 with 491.7 MB RAM, BlockManagerId(driver, ***IP masked***, 45737)
18/04/17 16:32:22 INFO storage.BlockManagerMaster: Registered BlockManager
18/04/17 16:32:23 INFO scheduler.EventLoggingListener: Logging events to hdfs://smartdata-prod/user/spark/applicationHistory/application_1520875508177_0403_2
18/04/17 16:32:23 WARN spark.SparkContext: Dynamic Allocation and num executors both set, thus dynamic allocation disabled.
18/04/17 16:32:23 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@***IP masked***:51755)
18/04/17 16:32:23 INFO yarn.ExecutorRunnable: Preparing Local resources
18/04/17 16:32:23 INFO yarn.ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "smartdata-prod" port: -1 file: "/user/jenkins/.sparkStaging/application_1520875508177_0403/predictor-engine-1.0-jar-with-dependencies.jar" } size: 58733145 timestamp: 1523969330660 type: FILE visibility: PRIVATE, __spark_conf__ -> resource { scheme: "hdfs" host: "smartdata-prod" port: -1 file: "/user/jenkins/.sparkStaging/application_1520875508177_0403/__spark_conf__2003734234993939341.zip" } size: 35065 timestamp: 1523969330966 type: ARCHIVE visibility: PRIVATE, hbase-site.xml -> resource { scheme: "hdfs" host: "smartdata-prod" port: -1 file: "/user/jenkins/.sparkStaging/application_1520875508177_0403/hbase-site.xml" } size: 2888 timestamp: 1523969330833 type: FILE visibility: PRIVATE)
18/04/17 16:32:23 INFO yarn.ApplicationMaster:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH ->
{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/lib/spark/lib/spark-assembly.jar<CPS>$HADOOP_CLIENT_CONF_DIR<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$MR2_CLASSPATH<CPS>{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ST4-4.0.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-core-1.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-fate-1.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-start-1.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-trace-1.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/activation-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ant-1.9.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ant-launcher-1.9.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/antlr-2.7.7.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/antlr-runtime-3.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aopalliance-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apache-log4j-extras-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apache-log4j-extras-1.2.17.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apacheds-i18n-2.0.0-M15.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apacheds-kerberos-codec-2.0.0-M15.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/api-asn1-api-1.0.0-M20.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/api-util-1.0.0-M20.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asm-3.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asm-commons-3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asm-tree-3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/async-1.4.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asynchbase-1.7.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-1.7.6-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-compiler-1.7.6-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-ipc-1.7.6-cdh5.10.0-tests.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-ipc-1.7.6-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-mapred-1.7.6-cdh5.10.0-hadoop2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-maven-plugin-1.7.6-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-protobuf-1.7.6-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-service-archetype-1.7.6-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-thrift-1.7.6-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-core-1.10.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-kms-1.10.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-s3-1.10.6.jar:{{HADOOP_
COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-sts-1.10.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/bonecp-0.8.0.RELEASE.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/calcite-avatica-1.0.0-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/calcite-core-1.0.0-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/calcite-linq4j-1.0.0-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-beanutils-1.9.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-beanutils-core-1.8.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-cli-1.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-codec-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-codec-1.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-collections-3.2.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-compiler-2.7.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-compress-1.4.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-configuration-1.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-daemon-1.0.13.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-dbcp-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-digester-1.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-el-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-httpclient-3.0.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-httpclient-3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-io-2.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-jexl-2.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-lang-2.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-lang3-3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-logging-1.1.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-math-2.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-math3-3.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-net-3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-pool-1.5.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-vfs2-2.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-client-2.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-client-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-framework-2.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-framework-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-recipes-2.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-recipes-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/datanucleus-api-jdo-3.2.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/datanucleus-core-3.2.10.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/datanucl
eus-rdbms-3.2.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/derby-10.11.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/eigenbase-properties-1.1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/fastutil-6.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/findbugs-annotations-1.3.9-1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-avro-source-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-dataset-sink-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-file-channel-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-hdfs-sink-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-hive-sink-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-irc-sink-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-jdbc-channel-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-jms-source-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-kafka-channel-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-kafka-source-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-auth-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-configuration-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-core-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-elasticsearch-sink-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-embedded-agent-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-hbase-sink-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-kafka-sink-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-log4jappender-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-morphline-solr-sink-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-node-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-sdk-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-scribe-source-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-spillable-memory-channel-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-taildir-source-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-thrift-source-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-tools-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-twitter-source-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/geronimo-annotation_1.0_spec-1.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/geronimo-jaspic_1.0_spec-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/geronimo-jta_1.1_spec-1.1.1.jar:{{HADOOP_COMMON_HOME}
}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/groovy-all-2.4.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/gson-2.2.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guava-11.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guava-11.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guava-14.0.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guice-3.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guice-servlet-3.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-annotations-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-ant-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-archive-logs-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-archives-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-auth-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-aws-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-azure-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-common-2.6.0-cdh5.10.0-tests.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-common-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-datajoin-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-distcp-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-extras-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-gridmix-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-hdfs-2.6.0-cdh5.10.0-tests.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-hdfs-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-hdfs-nfs-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-app-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-common-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-hs-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.10.0-tests.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-examples-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-nfs-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-openstack-2.6.0-cdh5.10.0.jar
:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-rumen-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-sls-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-streaming-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-api-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-client-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-common-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-registry-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-common-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-nodemanager-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-tests-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hamcrest-core-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hamcrest-core-1.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-annotations-1.2.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-client-1.2.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-common-1.2.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-hadoop-compat-1.2.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-hadoop2-compat-1.2.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-protocol-1.2.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-server-1.2.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/high-scale-lib-1.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-accumulo-handler-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-ant-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-beeline-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-cli-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-common-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-contrib-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-exec-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-hbase-handler-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-hwi-1.1.0-cdh5.10.0.
jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-jdbc-1.1.0-cdh5.10.0-standalone.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-jdbc-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-metastore-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-serde-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-service-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-0.23-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-common-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-scheduler-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-testutils-1.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/htrace-core-3.2.0-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/htrace-core4-4.0.1-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/httpclient-4.2.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/httpcore-4.2.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hue-plugins-3.9.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/irclib-1.10.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ivy-2.0.0-rc2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-annotations-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-core-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-core-asl-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-databind-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-jaxrs-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-mapper-asl-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-xc-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jamon-runtime-2.3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/janino-2.7.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jasper-compiler-5.5.23.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jasper-runtime-5.5.23.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/java-xmlbuilder-0.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/javax.inject-1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jaxb-api-2.2.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jaxb-impl-2.2.3-1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jcommander-1.32.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jdo-api-3.0.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-client-1.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-core-1.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-guice-1.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-json-1.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.1
0.0-1.cdh5.10.0.p0.41/jars/jersey-server-1.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jets3t-0.9.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jettison-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-6.1.26.cloudera.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-all-7.6.0.v20120127.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-all-server-7.6.0.v20120127.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-util-6.1.26.cloudera.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jline-2.11.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jline-2.12.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/joda-time-1.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/joda-time-2.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jopt-simple-4.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jpam-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsch-0.1.42.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsp-api-2.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsr305-1.3.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsr305-3.0.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jta-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/junit-4.11.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kafka-clients-0.9.0-kafka-2.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kafka_2.10-0.9.0-kafka-2.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-data-core-1.0.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-data-hbase-1.0.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-data-hive-1.0.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-hadoop-compatibility-1.0.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/leveldbjni-all-1.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/libfb303-0.9.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/libthrift-0.9.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/log4j-1.2.16.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/log4j-1.2.17.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/logredactor-1.0.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/lz4-1.3.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mail-1.4.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mapdb-0.9.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/maven-scm-api-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/maven-scm-provider-svn-commons-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/maven-scm-provider-svnexe-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-core-2.2.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-core-3.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-json-3.0.2.jar:{{HADOOP_COMMON_HOME}}/../../.
./CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-jvm-3.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mina-core-2.0.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mockito-all-1.8.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/netty-3.10.5.Final.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/netty-3.9.4.Final.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/netty-all-4.0.23.Final.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/opencsv-2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/oro-2.0.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/paranamer-2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-avro-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-cascading-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-column-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-common-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-encoding-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-format-2.1.0-cdh5.10.0-javadoc.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-format-2.1.0-cdh5.10.0-sources.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-format-2.1.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-generator-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-hadoop-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-hadoop-bundle-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-jackson-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-pig-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-pig-bundle-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-protobuf-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-scala_2.10-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-scrooge_2.10-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-test-hadoop2-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-thrift-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-tools-1.5.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/plexus-utils-1.5.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/protobuf-java-2.5.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/regexp-1.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/scala-library-2.10.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/serializer-2.7.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/servlet-api-
2.5-20110124.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/servlet-api-2.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/slf4j-api-1.7.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/slf4j-log4j12-1.7.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/snappy-java-1.0.4.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/spark-1.6.0-cdh5.10.0-yarn-shuffle.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/spark-streaming-flume-sink_2.10-1.6.0-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/stax-api-1.0-2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/stax-api-1.0.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/stringtemplate-3.2.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/super-csv-2.2.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/tempus-fugit-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/trevni-avro-1.7.6-cdh5.10.0-hadoop2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/trevni-avro-1.7.6-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/trevni-core-1.7.6-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/twitter4j-core-3.0.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/twitter4j-media-support-3.0.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/twitter4j-stream-3.0.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/unused-1.0.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/velocity-1.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/velocity-1.7.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xalan-2.7.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xercesImpl-2.9.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xml-apis-1.3.04.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xmlenc-0.52.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xz-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/zkclient-0.7.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/zookeeper-3.4.5-cdh5.10.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/LICENSE.txt:{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/NOTICE.txt SPARK_YARN_CACHE_ARCHIVES -> hdfs://smartdata-prod/user/jenkins/.sparkStaging/application_1520875508177_0403/__spark_conf__2003734234993939341.zip#__spark_conf__ SPARK_YARN_CACHE_FILES_FILE_SIZES -> 58733145,2888 SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1520875508177_0403 SPARK_DIST_CLASSPATH -> 
/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ST4-4.0.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-core-1.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-fate-1.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-start-1.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-trace-1.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ant-1.9.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ant-launcher-1.9.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/antlr-2.7.7.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/antlr-runtime-3.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apache-log4j-extras-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apache-log4j-extras-1.2.17.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asm-commons-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asm-tree-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/async-1.4.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asynchbase-1.7.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-compiler-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-ipc-1.7.6-cdh5.10.0-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-ipc-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-mapred-1.7.6-cdh5.10.0-hadoop2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-maven-plugin-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-protobuf-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-service-archetype-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-thrift-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-core-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-kms-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-s3-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-sts-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/bonecp-0.8.0.RELEASE.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/calcite-avatica-1.0.0-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/calcite-core-1.0.0-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/calcite-linq4j-1.0.0-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-beanutils-1.9.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-codec-1.4.jar:/op
t/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-codec-1.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-compiler-2.7.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-dbcp-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-httpclient-3.0.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-jexl-2.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-lang3-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-math-2.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-pool-1.5.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-vfs2-2.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-client-2.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-framework-2.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-recipes-2.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/datanucleus-api-jdo-3.2.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/datanucleus-core-3.2.10.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/datanucleus-rdbms-3.2.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/derby-10.11.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/eigenbase-properties-1.1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/fastutil-6.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/findbugs-annotations-1.3.9-1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-avro-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-dataset-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-file-channel-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-hdfs-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-hive-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-irc-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-jdbc-channel-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-jms-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flu
me-kafka-channel-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-kafka-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-auth-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-configuration-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-core-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-elasticsearch-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-embedded-agent-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-hbase-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-kafka-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-log4jappender-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-morphline-solr-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-node-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-sdk-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-scribe-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-spillable-memory-channel-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-taildir-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-thrift-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-tools-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-twitter-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/geronimo-annotation_1.0_spec-1.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/geronimo-jaspic_1.0_spec-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/geronimo-jta_1.1_spec-1.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/groovy-all-2.4.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guava-11.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guava-14.0.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-annotations-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-ant-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-archive-logs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-archives-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-auth-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-aws-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-azure-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-common-2.6.0-cdh5.10.0-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-datajoin-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-distcp-2.6.0-cdh5.10.0.j
ar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-extras-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-gridmix-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-hdfs-2.6.0-cdh5.10.0-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-hdfs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-hdfs-nfs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-app-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-hs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.10.0-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-examples-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-nfs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-openstack-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-rumen-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-sls-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-streaming-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-api-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-client-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-registry-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-nodemanager-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-tests-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hamcrest-core-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-annotations-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jar
s/hbase-client-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-common-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-hadoop-compat-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-hadoop2-compat-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-protocol-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-server-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/high-scale-lib-1.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-accumulo-handler-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-ant-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-beeline-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-cli-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-common-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-contrib-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-exec-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-hbase-handler-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-hwi-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-jdbc-1.1.0-cdh5.10.0-standalone.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-jdbc-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-metastore-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-serde-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-service-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-0.23-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-common-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-scheduler-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-testutils-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/htrace-core-3.2.0-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hue-plugins-3.9.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/irclib-1.10.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ivy-2.0.0-rc2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-annotations-2.2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-core-2.2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-databind-2.2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jamon-runtime-2.3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.
41/jars/janino-2.7.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jcommander-1.32.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jdo-api-3.0.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-client-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-all-7.6.0.v20120127.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-all-server-7.6.0.v20120127.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jline-2.12.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/joda-time-1.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/joda-time-2.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jopt-simple-4.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jpam-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jta-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kafka-clients-0.9.0-kafka-2.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kafka_2.10-0.9.0-kafka-2.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-data-core-1.0.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-data-hbase-1.0.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-data-hive-1.0.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-hadoop-compatibility-1.0.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/libfb303-0.9.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/libthrift-0.9.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/log4j-1.2.16.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/logredactor-1.0.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/lz4-1.3.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mail-1.4.1.jar:/opt/clo
udera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mapdb-0.9.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/maven-scm-api-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/maven-scm-provider-svn-commons-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/maven-scm-provider-svnexe-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-core-2.2.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-core-3.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-json-3.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-jvm-3.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mina-core-2.0.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/netty-3.9.4.Final.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/netty-all-4.0.23.Final.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/opencsv-2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/oro-2.0.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-avro-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-cascading-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-column-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-common-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-encoding-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-format-2.1.0-cdh5.10.0-javadoc.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-format-2.1.0-cdh5.10.0-sources.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-format-2.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-generator-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-hadoop-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-hadoop-bundle-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-jackson-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-pig-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-pig-bundle-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-protobuf-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-scala_2.10-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-scrooge_2.10-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-test-hadoop2-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-thrift-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-tools-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/plexus-utils-1.5.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/regexp-1.3.jar:/o
pt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/scala-library-2.10.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/serializer-2.7.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/servlet-api-2.5-20110124.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/slf4j-log4j12-1.7.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/spark-1.6.0-cdh5.10.0-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/spark-streaming-flume-sink_2.10-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/stax-api-1.0.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/stringtemplate-3.2.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/super-csv-2.2.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/tempus-fugit-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/trevni-avro-1.7.6-cdh5.10.0-hadoop2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/trevni-avro-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/trevni-core-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/twitter4j-core-3.0.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/twitter4j-media-support-3.0.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/twitter4j-stream-3.0.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/unused-1.0.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/velocity-1.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/velocity-1.7.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xalan-2.7.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xercesImpl-2.9.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xml-apis-1.3.04.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/zkclient-0.7.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/zookeeper-3.4.5-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/LICENSE.txt:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/NOTICE.txt SPARK_YARN_CACHE_ARCHIVES_FILE_SIZES -> 35065 SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE SPARK_USER -> jenkins SPARK_YARN_CACHE_ARCHIVES_TIME_STAMPS -> 1523969330966 SPARK_YARN_MODE -> true SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1523969330660,1523969330833 SPARK_YARN_CACHE_ARCHIVES_VISIBILITIES -> PRIVATE SPARK_YARN_CACHE_FILES -> hdfs://smartdata-prod/user/jenkins/.sparkStaging/application_1520875508177_0403/predictor-engine-1.0-jar-with-dependencies.jar#__app__.jar,hdfs://smartdata-prod/user/jenkins/.sparkStaging/application_1520875508177_0403/hbase-site.xml#hbase-site.xml command: LD_LIBRARY_PATH="{{HADOOP_COMMON_HOME}}/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/native:$LD_LIBRARY_PATH" \ {{JAVA_HOME}}/bin/java \ -server \ -XX:OnOutOfMemoryError='kill %p' \ -Xms6144m \ -Xmx6144m \ -Djava.io.tmpdir={{PWD}}/tmp \ '-Dspark.shuffle.service.port=7337' \ '-Dspark.authenticate=false' \ -Dspark.yarn.app.container.log.dir=<LOG_DIR> \ 
org.apache.spark.executor.CoarseGrainedExecutorBackend \ --driver-url \ spark://CoarseGrainedScheduler@***IP masked***:51755 \ --executor-id \ <executorId> \ --hostname \ <hostname> \ --cores \ 4 \ --app-id \ application_1520875508177_0403 \ --user-class-path \ file:$PWD/__app__.jar \ 1><LOG_DIR>/stdout \ 2><LOG_DIR>/stderr resources: __app__.jar -> resource { scheme: "hdfs" host: "smartdata-prod" port: -1 file: "/user/jenkins/.sparkStaging/application_1520875508177_0403/predictor-engine-1.0-jar-with-dependencies.jar" } size: 58733145 timestamp: 1523969330660 type: FILE visibility: PRIVATE __spark_conf__ -> resource { scheme: "hdfs" host: "smartdata-prod" port: -1 file: "/user/jenkins/.sparkStaging/application_1520875508177_0403/__spark_conf__2003734234993939341.zip" } size: 35065 timestamp: 1523969330966 type: ARCHIVE visibility: PRIVATE hbase-site.xml -> resource { scheme: "hdfs" host: "smartdata-prod" port: -1 file: "/user/jenkins/.sparkStaging/application_1520875508177_0403/hbase-site.xml" } size: 2888 timestamp: 1523969330833 type: FILE visibility: PRIVATE =============================================================================== 18/04/17 16:32:23 INFO yarn.YarnRMClient: Registering the ApplicationMaster 18/04/17 16:32:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm70 18/04/17 16:32:23 INFO yarn.YarnAllocator: Will request 12 executor container(s), each with 4 core(s) and 6758 MB memory (including 614 MB of overhead) 18/04/17 16:32:23 INFO yarn.YarnAllocator: Submitted 12 unlocalized container requests. 18/04/17 16:32:23 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000002 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000003 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000004 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000005 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000006 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000007 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000008 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000009 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000010 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000011 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000012 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Launching container container_e26_1520875508177_0403_02_000013 on host ***hostname masked*** 18/04/17 16:32:23 INFO yarn.YarnAllocator: Received 12 containers from YARN, launching executors on 12 of them. 
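The YarnAllocator request above — 12 containers, each with 4 cores and 6758 MB "(including 614 MB of overhead)" — is consistent with the 6144 MB executor heap set by -Xms6144m/-Xmx6144m in the launch command plus Spark 1.6's default spark.yarn.executor.memoryOverhead of max(384 MB, 10% of the executor memory). A minimal sketch of that sizing arithmetic, assuming the default overhead formula; the object and method names below are illustrative only and not part of this application:

    // Sketch only: executor container sizing as commonly understood for Spark 1.6 on YARN.
    // Assumes the default spark.yarn.executor.memoryOverhead = max(384 MB, 10% of executor memory).
    object ExecutorContainerSizing {
      def overheadMb(executorMemoryMb: Int): Int =
        math.max(384, (executorMemoryMb * 0.10).toInt)

      def containerMb(executorMemoryMb: Int): Int =
        executorMemoryMb + overheadMb(executorMemoryMb)

      def main(args: Array[String]): Unit = {
        val heapMb = 6144                                 // matches -Xms6144m / -Xmx6144m above
        println(s"overhead  = ${overheadMb(heapMb)} MB")  // 614 MB, as logged by YarnAllocator
        println(s"container = ${containerMb(heapMb)} MB") // 6758 MB, as logged by YarnAllocator
      }
    }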
18/04/17 16:32:23 INFO yarn.ExecutorRunnable: Preparing Local resources 18/04/17 16:32:23 INFO yarn.ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "smartdata-prod" port: -1 file: "/user/jenkins/.sparkStaging/application_1520875508177_0403/predictor-engine-1.0-jar-with-dependencies.jar" } size: 58733145 timestamp: 1523969330660 type: FILE visibility: PRIVATE, __spark_conf__ -> resource { scheme: "hdfs" host: "smartdata-prod" port: -1 file: "/user/jenkins/.sparkStaging/application_1520875508177_0403/__spark_conf__2003734234993939341.zip" } size: 35065 timestamp: 1523969330966 type: ARCHIVE visibility: PRIVATE, hbase-site.xml -> resource { scheme: "hdfs" host: "smartdata-prod" port: -1 file:
"/user/jenkins/.sparkStaging/application_1520875508177_0403/hbase-site.xml" } size: 2888 timestamp: 1523969330833 type: FILE visibility: PRIVATE) 18/04/17 16:32:26 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (***hostname masked***:46210) with ID 10 18/04/17 16:32:26 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (***hostname masked***:37238) with ID 5 18/04/17 16:32:26 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (***hostname masked***:46209) with ID 12 18/04/17 16:32:26 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:55095 with 3.1 GB RAM, BlockManagerId(10, ***hostname masked***, 55095) 18/04/17 16:32:26 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:53081 with 3.1 GB RAM, BlockManagerId(5, ***hostname masked***, 53081) 18/04/17 16:32:26 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:42188 with 3.1 GB RAM, BlockManagerId(12, ***hostname masked***, 42188) 18/04/17 16:32:26 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (***hostname masked***:37239) with ID 6 18/04/17 16:32:26 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:35790 with 3.1 GB RAM, BlockManagerId(6, ***hostname masked***, 35790) 18/04/17 16:32:26 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (***hostname masked***:37243) with ID 4 18/04/17 16:32:26 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (***hostname masked***:46214) with ID 9 18/04/17 16:32:26 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:55279 with 3.1 GB RAM, BlockManagerId(4, ***hostname masked***, 55279) 18/04/17 16:32:26 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:55033 with 3.1 GB RAM, BlockManagerId(9, ***hostname masked***, 55033) 18/04/17 16:32:27 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (***hostname masked***:46215) with ID 8 18/04/17 16:32:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:50260 with 3.1 GB RAM, BlockManagerId(8, ***hostname masked***, 50260) 18/04/17 16:32:27 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (***hostname masked***:37250) with ID 3 18/04/17 16:32:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:60107 with 3.1 GB RAM, BlockManagerId(3, ***hostname masked***, 60107) 18/04/17 16:32:27 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (***hostname masked***:37254) with ID 2 18/04/17 16:32:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:43653 with 3.1 GB RAM, BlockManagerId(2, ***hostname masked***, 43653) 18/04/17 16:32:28 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (***hostname masked***:37258) with ID 1 18/04/17 16:32:28 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8 18/04/17 16:32:28 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done 18/04/17 16:32:28 INFO cluster.YarnClusterSchedulerBackend: Registered executor 
NettyRpcEndpointRef(null) (***hostname masked***:46221) with ID 11 18/04/17 16:32:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:56034 with 3.1 GB RAM, BlockManagerId(1, ***hostname masked***, 56034) 18/04/17 16:32:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:57847 with 3.1 GB RAM, BlockManagerId(11, ***hostname masked***, 57847) 18/04/17 16:32:28 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7c0e3536 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-cdh5.10.0--1, built on 01/20/2017 20:08 GMT 18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:host.name=***hostname masked*** 18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_60 18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation 18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.8.0_60/jre 18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/hadoop/6/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/container_e26_1520875508177_0403_02_000001:/hadoop/6/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/container_e26_1520875508177_0403_02_000001/__spark_conf__:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/lib/spark/lib/spark-assembly.jar:/etc/hadoop/conf.cloudera.yarn:/run/cloudera-scm-agent/process/1750-yarn-NODEMANAGER:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-annotations.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-auth.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-aws.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-common-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-common.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-nfs.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-nfs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-common-2.6.0-cdh5.10.0-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-aws-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-auth-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/hadoop-annotations-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-format.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-format-sources.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-format-javadoc.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-tools.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-thrift.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-test-hadoop2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-scrooge_2.10.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-scala_2.10.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-protobuf.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10
.0.p0.41/lib/hadoop/parquet-pig.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-pig-bundle.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-jackson.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-hadoop.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-hadoop-bundle.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-generator.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-encoding.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-common.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-column.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-cascading.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/parquet-avro.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.1
0.0.p0.41/lib/hadoop/lib/avro.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/hue-plugins-3.9.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/logredactor-1.0.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-beanutils-1.9.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/aws-java-sdk-sts-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/aws-java-sdk-s3-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/aws-java-sdk-kms-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/aws-java-sdk-core-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/slf4j-log4j12.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/hadoop-hdfs-nfs.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/hadoop-hdfs-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/l
ib/hadoop-hdfs/hadoop-hdfs.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.10.0-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-hdfs/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-api.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-client.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-common.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-registry.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.
jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-tests-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-registry-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-client-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/hadoop-yarn-api-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/spark-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar
:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-yarn/lib/spark-1.6.0-cdh5.10.0-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/avro.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-beanutils-1.9.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoo
p-mapreduce/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-ant-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-ant.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-archive-logs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-archive-logs.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-archives-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-archives.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-auth-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-auth.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-azure-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-azure.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-datajoin-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-datajoin.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-distcp-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-distcp.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-extras-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-extras.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-gridmix-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-gridmix.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-app-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-app.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-common.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-core-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-core.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.6.0-cdh5.10.0.jar:/o
pt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-plugins.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.10.0-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-nativetask.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-client-shuffle.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-openstack-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-openstack.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-rumen-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-rumen.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-sls-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-sls.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-streaming-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hadoop-streaming.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jackson-annotations-2.2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jackson-core-2.2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jackson-databind-2.2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jasper-runtime
-5.5.23.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/microsoft-windowsazure-storage-sdk-0.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/metrics-core-3.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/zookeeper.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/ja
vax.inject-1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop-mapreduce/lib/avro.jar::/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ST4-4.0.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-core-1.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-fate-1.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-start-1.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/accumulo-trace-1.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ant-1.9.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ant-launcher-1.9.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/antlr-2.7.7.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/antlr-runtime-3.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apache-log4j-extras-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apache-log4j-extras-1.2.17.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asm-commons-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asm-tree-3.1.jar:/opt/cloudera/parcels/CDH-
5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/async-1.4.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/asynchbase-1.7.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-compiler-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-ipc-1.7.6-cdh5.10.0-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-ipc-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-mapred-1.7.6-cdh5.10.0-hadoop2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-maven-plugin-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-protobuf-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-service-archetype-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/avro-thrift-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-core-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-kms-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-s3-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/aws-java-sdk-sts-1.10.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/bonecp-0.8.0.RELEASE.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/calcite-avatica-1.0.0-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/calcite-core-1.0.0-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/calcite-linq4j-1.0.0-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-beanutils-1.9.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-codec-1.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-compiler-2.7.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.1
0.0-1.cdh5.10.0.p0.41/jars/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-dbcp-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-httpclient-3.0.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-jexl-2.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-lang3-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-math-2.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-pool-1.5.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/commons-vfs2-2.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-client-2.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-framework-2.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-recipes-2.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/datanucleus-api-jdo-3.2.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/datanucleus-core-3.2.10.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/datanucleus-rdbms-3.2.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/derby-10.11.1.1.jar:/opt/
cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/eigenbase-properties-1.1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/fastutil-6.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/findbugs-annotations-1.3.9-1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-avro-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-dataset-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-file-channel-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-hdfs-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-hive-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-irc-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-jdbc-channel-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-jms-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-kafka-channel-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-kafka-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-auth-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-configuration-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-core-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-elasticsearch-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-embedded-agent-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-hbase-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-kafka-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-log4jappender-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-morphline-solr-sink-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-node-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-ng-sdk-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-scribe-source-1.6.0-cdh5.10.
0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-spillable-memory-channel-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-taildir-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-thrift-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-tools-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/flume-twitter-source-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/geronimo-annotation_1.0_spec-1.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/geronimo-jaspic_1.0_spec-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/geronimo-jta_1.1_spec-1.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/groovy-all-2.4.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guava-11.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guava-14.0.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-annotations-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-ant-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-archive-logs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-archives-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-auth-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-aws-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-azure-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-common-2.6.0-cdh5.10.0-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-datajoin-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-distcp-2.6.0-cdh5.10.0.jar:/op
t/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-extras-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-gridmix-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-hdfs-2.6.0-cdh5.10.0-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-hdfs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-hdfs-nfs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-app-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-hs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.10.0-tests.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-mapreduce-examples-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-nfs-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-openstack-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-rumen-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-sls-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-streaming-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-api-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-
client-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-registry-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-common-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-nodemanager-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-tests-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hamcrest-core-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-annotations-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-client-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-common-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-hadoop-compat-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-hadoop2-compat-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-protocol-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-server-1.2.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/high-scale-lib-1.1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-accumulo-handler-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-ant-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-beeline-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-cli-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-common-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-contrib-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.
41/jars/hive-exec-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-hbase-handler-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-hwi-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-jdbc-1.1.0-cdh5.10.0-standalone.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-jdbc-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-metastore-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-serde-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-service-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-0.23-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-common-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-shims-scheduler-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-testutils-1.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/htrace-core-3.2.0-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hue-plugins-3.9.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/irclib-1.10.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/ivy-2.0.0-rc2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-annotations-2.2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-core-2.2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-databind-2.2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jackson-xc-1.8.8.jar:/opt/cloudera/
parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jamon-runtime-2.3.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/janino-2.7.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jcommander-1.32.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jdo-api-3.0.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-client-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-all-7.6.0.v20120127.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-all-server-7.6.0.v20120127.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jline-2.12.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/joda-time-1.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/joda-time-2.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jopt-simple-4.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jpam-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.10.
0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/jta-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kafka-clients-0.9.0-kafka-2.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kafka_2.10-0.9.0-kafka-2.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-data-core-1.0.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-data-hbase-1.0.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-data-hive-1.0.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/kite-hadoop-compatibility-1.0.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/libfb303-0.9.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/libthrift-0.9.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/log4j-1.2.16.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/logredactor-1.0.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/lz4-1.3.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mail-1.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mapdb-0.9.9.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/maven-scm-api-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/maven-scm-provider-svn-commons-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/maven-scm-provider-svnexe-1.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-core-2.2.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-core-3.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-json-3.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/metrics-jvm-3.0.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars
/microsoft-windowsazure-storage-sdk-0.6.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mina-core-2.0.4.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/netty-3.9.4.Final.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/netty-all-4.0.23.Final.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/opencsv-2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/oro-2.0.8.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-avro-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-cascading-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-column-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-common-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-encoding-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-format-2.1.0-cdh5.10.0-javadoc.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-format-2.1.0-cdh5.10.0-sources.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-format-2.1.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-generator-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-hadoop-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-hadoop-bundle-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-jackson-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-pig-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-pig-bundle-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-protobuf-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-scala_2.10-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-scrooge_2.10-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.
cdh5.10.0.p0.41/jars/parquet-test-hadoop2-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-thrift-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/parquet-tools-1.5.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/plexus-utils-1.5.6.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/regexp-1.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/scala-library-2.10.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/serializer-2.7.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/servlet-api-2.5-20110124.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/slf4j-log4j12-1.7.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/spark-1.6.0-cdh5.10.0-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/spark-streaming-flume-sink_2.10-1.6.0-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/stax-api-1.0.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/stringtemplate-3.2.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/super-csv-2.2.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/tempus-fugit-1.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/trevni-avro-1.7.6-cdh5.10.0-hadoop2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/trevni-avro-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/trevni-core-1.7.6-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/twitter4j-core-3.0.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/twitter4j-media-support-3.0.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/twitter4j-stream-3.0.3.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh
5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/unused-1.0.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/velocity-1.5.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/velocity-1.7.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xalan-2.7.2.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xercesImpl-2.9.1.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xml-apis-1.3.04.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/zkclient-0.7.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/jars/zookeeper-3.4.5-cdh5.10.0.jar:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/LICENSE.txt:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/NOTICE.txt
18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/../../../CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/native::/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/lib/native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/hadoop/6/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/container_e26_1520875508177_0403_02_000001/tmp
18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-327.el7.x86_64
18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:user.name=yarn
18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:user.home=/var/lib/hadoop-yarn
18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Client environment:user.dir=/hadoop/6/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/container_e26_1520875508177_0403_02_000001
18/04/17 16:32:28 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7c0e35360x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:28 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:28 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51456, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:28 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9140, negotiated timeout = 60000
18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9140
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9140 closed
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties
18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to
18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092
18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000
18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(1,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_pfr_account_advice)
18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing
18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092
18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_pfr_account_advice,0]=67}
18/04/17 16:32:29 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (***hostname masked***:46224) with ID 7
18/04/17 16:32:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager ***hostname masked***:41751 with 3.1 GB RAM, BlockManagerId(7, ***hostname masked***, 41751)
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x40174fa4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2cc45eb8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2cc45eb80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x645c2446 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ca3be30 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x645c24460x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x40174fa40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x725f1ca4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4ca3be300x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x725f1ca40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3cfbf13 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5571da37 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x58fd1188 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3cfbf130x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x58fd11880x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5571da370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x12b63554 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x12b635540x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3680ad05 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3680ad050x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3b94ba5f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3b94ba5f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2c3906a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2c3906a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x603d890 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x603d8900x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x71da0b53 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x71da0b530x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57852, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51468, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51471, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34210, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34213, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34211, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51473, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5ba954a0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x69bbdc95 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34218, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x592ed444 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x36a9ee00 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4a235347 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4a2353470x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57854, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x51882e55 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34221, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x268aaf78 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34220, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x36a9ee000x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x592ed4440x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57857, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x69bbdc950x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5ba954a00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x69649585 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51479, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5f9f9b88 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x69f15e42 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c917b, negotiated timeout = 60000
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x48f945cc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x268aaf780x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6bb07bc7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51478, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x51882e550x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63e2c14b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6bb07bc70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x56aaaa9d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x48f945cc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x69f15e420x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x603400ec connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5f9f9b880x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x696495850x0, quorum=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a6b, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a68, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9144, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a6a, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a69, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a6c, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x603400ec0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x60fafe36 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x56aaaa9d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x63e2c14b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51480, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9146, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x60fafe360x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2969c57f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2969c57f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c917c, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x643be5c4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9143, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9142, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34229, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x269706d2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x643be5c40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:32:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x307e9f23 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34225, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51483, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57866, server: ***hostname masked***/***IP masked***:2181 
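The burst of hconnection-0x... ZooKeeper clients above corresponds to HBase client connections being opened against the /hbase base znode on the masked three-node quorum. The predictor-engine source is not part of this log, so the following is only a rough Scala sketch of an HBase 1.x connection configured the way these messages suggest; the quorum hosts and the table name are placeholders, not values taken from the log.

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Get}
import org.apache.hadoop.hbase.util.Bytes

object HBaseConnectionSketch {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()
    // Settings mirrored from the log: three ZooKeeper hosts on 2181, base znode /hbase,
    // 60000 ms session timeout. Hostnames below are placeholders.
    conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com")
    conf.set("hbase.zookeeper.property.clientPort", "2181")
    conf.set("zookeeper.znode.parent", "/hbase")
    conf.set("zookeeper.session.timeout", "60000")

    // Each createConnection() starts its own ZooKeeper client (one hconnection-0x... watcher),
    // which is why so many of them appear within the same second in this log.
    val connection = ConnectionFactory.createConnection(conf)
    try {
      val table = connection.getTable(TableName.valueOf("some_table")) // placeholder table
      val result = table.get(new Get(Bytes.toBytes("some-row-key")))   // placeholder row key
      println(s"row empty: ${result.isEmpty}")
      table.close()
    } finally {
      connection.close()
    }
  }
}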
18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51482, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9145, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c917d, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a6d, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57873, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a6e, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57869, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57876, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a6f, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x307e9f230x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9147, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34246, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x269706d20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34230, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9149, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9148, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57888, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57886, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51503, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34244, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51501, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 WARN spark.Utils: Start listening for STOP command on 9850 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57880, server: ***hostname masked***/***IP masked***:2181 18/04/17 
16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34236, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c917e, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9181, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9180, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c917f, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a70, negotiated timeout = 60000 18/04/17 16:32:29 INFO spark.Utils: No offsets found for topic predictor_ufo_response_offer. Starting from 0L. 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a6c 18/04/17 16:32:29 INFO spark.Utils: No offsets found for topic predictor_agreement_number_days_before_expiration_validity_period. Starting from 0L. 
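The "No offsets found for topic ... Starting from 0L." lines come from the job's own spark.Utils helper, whose code is not included in this log; the HConnection open/close churn around them suggests the stored offsets are looked up through HBase, though the log does not show the storage directly. Purely to illustrate that fall-back-to-zero pattern, and assuming a hypothetical offsets table and column layout, a lookup could look like this in Scala:

import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.{Connection, Get}
import org.apache.hadoop.hbase.util.Bytes

object StartingOffsetSketch {
  // Hypothetical table/column layout; the real predictor-engine schema is not shown in this log.
  private val offsetsTable = TableName.valueOf("predictor_offsets")
  private val cf  = Bytes.toBytes("o")
  private val col = Bytes.toBytes("offset")

  def startingOffset(conn: Connection, topic: String): Long = {
    val table = conn.getTable(offsetsTable)
    try {
      val result = table.get(new Get(Bytes.toBytes(topic)))
      if (result.isEmpty) {
        // Mirrors the log message: no stored offset means the topic is read from the beginning.
        println(s"No offsets found for topic $topic. Starting from 0L.")
        0L
      } else {
        Bytes.toLong(result.getValue(cf, col))
      }
    } finally table.close()
  }
}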
18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9146 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a914a, negotiated timeout = 60000 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a68 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9183, negotiated timeout = 60000 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a69 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c917b 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51513, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c917c 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a72, negotiated timeout = 60000 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a6a 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9144 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a914b, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a71, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51516, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9182, negotiated timeout = 60000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a6b 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a73, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9184, negotiated timeout = 60000 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a6d 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(4,***hostname 
masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_address_fact_residental_date_from_gold) 18/04/17 16:32:29 INFO spark.Utils: No offsets found for topic predictor_sas_rtdm_offer. Starting from 0L. 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9142 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a6c closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9143 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_address_fact_residental_date_from_gold,0]=5669622} 18/04/17 16:32:29 INFO spark.Utils: No offsets found for topic predictor_income_joint_gold. Starting from 0L. 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a6f 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9146 closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a914c, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c917b closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c917c closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a69 closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a68 closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a914d, negotiated timeout = 60000 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a6b closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a6a closed 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9144 closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9143 closed 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9142 closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: 
Session: 0x2626be142b28a6d closed 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a6f closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO spark.Utils: No offsets found for topic predictor_income_offline_family_predicted. Starting from 0L. 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9147 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c917d 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9145 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(1,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_employment_name_gold) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(3,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_cuid_gold) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c917e 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a72 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname 
masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(3,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_address_fact_residental_value_gold) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(1,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_ufo_response_offer) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9183 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(3,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_current_family_status_gold) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9181 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a914a 18/04/17 16:32:29 INFO spark.Utils: No offsets found for topic predictor_status_model_client_relationship_implicit_gold. Starting from 0L. 18/04/17 16:32:29 INFO spark.Utils: No offsets found for topic predictor_income_offline_predicted. Starting from 0L. 
18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9182 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c917f 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(1,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_agreement_number_days_before_expiration_validity_period) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a6e 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(2,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_address_constant_registration_gold) 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a71 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9148 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a914b 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9149 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a914c 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(4,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_birth_place_gold) 18/04/17 16:32:29 INFO spark.Utils: No offsets found for topic predictor_personal_age_customer_gold. Starting from 0L. 
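The "Fetching metadata from broker BrokerEndPoint(...)" records, together with the SyncProducer connect/disconnect pairs and the VerifiableProperties output (metadata.broker.list, request.timeout.ms=1000, empty client.id), are the old Kafka 0.8-style metadata lookups used to find each topic partition's leader before offsets can be read. A standalone equivalent of that lookup, with a placeholder broker host and the same empty client.id, might look like the sketch below; it is not the job's actual code.

import scala.collection.JavaConverters._
import kafka.javaapi.TopicMetadataRequest
import kafka.javaapi.consumer.SimpleConsumer

object LeaderLookupSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder broker; the log's metadata.broker.list has five masked hosts on port 9092.
    val consumer = new SimpleConsumer("broker1.example.com", 9092, 1000, 64 * 1024, "")
    try {
      val request  = new TopicMetadataRequest(List("predictor_ufo_response_offer").asJava)
      val response = consumer.send(request)
      for {
        topic     <- response.topicsMetadata.asScala
        partition <- topic.partitionsMetadata.asScala
      } println(s"topic=${topic.topic} partition=${partition.partitionId} leader=${partition.leader}")
    } finally consumer.close()
  }
}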
18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9184 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9180 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a73 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a70 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_employment_name_gold,0]=2188764} 18/04/17 16:32:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a914d 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(0,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_current_family_children_number_gold) 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_cuid_gold,0]=1685582} 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_ufo_response_offer,0]=0} 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_address_fact_residental_value_gold,0]=5669626} 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_current_family_status_gold,0]=546202} 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_agreement_number_days_before_expiration_validity_period,0]=0} 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_birth_place_gold,0]=1684670} 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9147 closed 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(0,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_address_employer_gold) 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_address_constant_registration_gold,0]=5669626} 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 
INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_current_family_children_number_gold,0]=524863} 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(1,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_employment_occupation_gold) 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(2,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_sas_rtdm_offer) 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(1,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_income_joint_gold) 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_address_employer_gold,0]=5669795} 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9183 closed 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a70 closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9145 closed 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_income_joint_gold,0]=0} 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9149 closed 18/04/17 16:32:29 INFO spark.Utils: Starting offsets 
loaded: {[predictor_employment_occupation_gold,0]=1781340} 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9182 closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a6e closed 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a71 closed 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a914d closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c917f closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a914c closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a914b closed 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_sas_rtdm_offer,0]=0} 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9148 closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9181 closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a72 closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9184 closed 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9180 closed 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a73 closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c917e closed 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a914a closed 18/04/17 16:32:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:32:29 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c917d closed 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 
18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(3,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_income_offline_predicted) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 
18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(1,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_full_name_secondname_gold) 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(0,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_employment_type_gold) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(0,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_full_name_lastname_gold) 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(0,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_income_offline_family_predicted) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(2,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_employment_start_date_work_current_gold) 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(4,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_main_phone_number_gold) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(4,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_personal_age_customer_gold) 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 
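Once a starting offset is known for every topic partition (the stored value where one exists, otherwise 0L, as the "Starting offsets loaded" lines report), a Spark 1.6 job would normally hand them to the Kafka direct-stream API. The sketch below is not the predictor-engine code, only a minimal illustration using two of the offsets reported above, with placeholder broker hosts.

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object DirectStreamSketch {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("predictor-sketch"), Seconds(30))

    // Placeholder brokers; the log lists five masked hosts on port 9092.
    val kafkaParams = Map("metadata.broker.list" -> "broker1.example.com:9092,broker2.example.com:9092")

    // Two of the starting offsets visible above; every other topic would be added the same way.
    val fromOffsets = Map(
      TopicAndPartition("predictor_ufo_response_offer", 0) -> 0L,
      TopicAndPartition("predictor_cuid_gold", 0)          -> 1685582L
    )

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, String)](
      ssc, kafkaParams, fromOffsets,
      (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message))

    stream.count().print()
    ssc.start()
    ssc.awaitTermination()
  }
}

The message handler here just returns (key, value) pairs; the real job presumably does considerably more with each record before writing results back to Kafka or HBase.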
18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(3,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_passport_ru_number_gold) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(2,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_status_model_client_relationship_implicit_gold) 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(1,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_address_constant_registration_date_from_gold) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(3,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_full_name_firstname_gold) 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(2,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_address_employer_date_from_gold) 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_income_offline_predicted,0]=0} 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_full_name_secondname_gold,0]=1685554} 18/04/17 16:32:29 INFO 
producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_full_name_lastname_gold,0]=1685579} 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(1,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_passport_ru_issuer_gold) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_income_offline_family_predicted,0]=0} 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_employment_start_date_work_current_gold,0]=1979514} 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(3,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_snils_number_gold) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_passport_ru_number_gold,0]=2532192} 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_personal_age_customer_gold,0]=0} 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: 
{[predictor_employment_type_gold,0]=269201} 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_address_constant_registration_date_from_gold,0]=5669742} 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_main_phone_number_gold,0]=2296691} 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(1,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_personal_gender_gold) 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(0,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_passport_ru_issued_gold) 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_address_employer_date_from_gold,0]=5669626} 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_full_name_firstname_gold,0]=1685622} 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property client.id is overridden to 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_status_model_client_relationship_implicit_gold,0]=0} 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(0,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_birth_date_gold) 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_passport_ru_issuer_gold,0]=2532254} 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(3,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_passport_ru_series_gold) 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property metadata.broker.list is overridden to ***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092,***hostname masked***:9092 18/04/17 16:32:29 INFO utils.VerifiableProperties: Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO utils.VerifiableProperties: 
Property request.timeout.ms is overridden to 1000 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_snils_number_gold,0]=1265615} 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(1,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_passport_ru_code_issuer_gold) 18/04/17 16:32:29 INFO client.ClientUtils$: Fetching metadata from broker BrokerEndPoint(3,***hostname masked***,9092) with correlation id 0 for 1 topic(s) Set(predictor_inn_tax_gold) 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO producer.SyncProducer: Connected to ***hostname masked***:9092 for producing 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_personal_gender_gold,0]=1648995} 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_passport_ru_issued_gold,0]=2532312} 18/04/17 16:32:29 INFO producer.SyncProducer: Disconnecting from ***hostname masked***:9092 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_birth_date_gold,0]=1685618} 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_passport_ru_series_gold,0]=2532138} 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_passport_ru_code_issuer_gold,0]=2532226} 18/04/17 16:32:29 INFO spark.Utils: Starting offsets loaded: {[predictor_inn_tax_gold,0]=20834} 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@3059eae9 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@58ca6878 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO 
dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@2537055f 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@631918dc 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@1ce8a8f4 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@176a8ab9 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@a6a68f6 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@219e4e9f 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated 
org.apache.spark.streaming.dstream.ForEachDStream@7868ed50 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@3b149fa6 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@62606dea 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@31593d8f 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@e644499 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@1112135c 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@707c23a9 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 
18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@8b1dd6c 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@245d106f 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@25547f4b 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@34a47123 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@32788066 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@7eea302e 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: 
Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@78c2b69f 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@6f78edcd 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@19f52bf4 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@36b113a4 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@5b2a3da3 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@7394a58e 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 
60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@430e9a83 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@7d1c9d75 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@77d16050 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@213f13f1 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@16525fd7 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@fbc9f5d 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@7313ad1a 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide 
time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@2aea9812 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@2ec2d67f 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@456ff28d 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@2bfafc47 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@55aecb22 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@6d678ec6 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 
16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@57a6e27d 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@2bc2340a 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@8961d8b 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@59d6a3e1 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@390307ac 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@35b16e2 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@3b193947 
18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@57bc4c6e 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@76230f52 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@2d94eb72 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@5f353c86 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@6adfd432 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@4e81fa4 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO 
dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@eadab30 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@788a676f 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@6c605208 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@2831651f 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@4b26af02 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@60bc0816 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated 
org.apache.spark.streaming.dstream.ForEachDStream@ea2e61b 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@2d98734d 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@53b88743 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@53c4d99a 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@4acf7fc7 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@24a37fc8 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@33367680 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 
18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@464b6175 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@77cb0f3c 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@48296332 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@48ab8ec1 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@55761e68 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@7aeb4915 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: 
Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@79588589 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@6d11c1e5 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@8711771 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@28f2448b 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@15bbe8d1 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@76460d6a 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 
60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@768c564c 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@4c60920a 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@6d70637 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@40bc76da 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@549676b7 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@1c2e4eed 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@227615e1 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide 
time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@1dfadd3 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@661b4091 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@82c2399 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@1ae7f660 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@66aa61da 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@5a89ba56 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 
16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@33506a75 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@16a1de89 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@2330c936 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@347c9c0c 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@44f5b34 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@1c0298a2 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@39de9b6d 
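The repeated DirectKafkaInputDStream -> TransformedDStream -> ForEachDStream triples above, each reporting a slide time and remember duration of 60000 ms, indicate one direct Kafka stream per topic driven by a 60-second batch interval. A minimal sketch of the driver setup this implies, under the assumption that the one-minute interval is simply the StreamingContext batch duration (class and variable names are illustrative, not taken from the application source):

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    // Sketch only: the 60-second batch duration matches the 60000 ms slide time
    // reported for every DStream initialized above.
    SparkConf conf = new SparkConf().setAppName("predictor-engine");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));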
18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@6992eca6 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@177a8c5b 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@1fb5c19e 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@52fd34f6 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@2cf415d3 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@6f2f4c42 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO 
dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@5e27c908 18/04/17 16:32:59 INFO dstream.ForEachDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO dstream.TransformedDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: metadataCleanupDelay = -1 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO kafka.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@65c60cbe 18/04/17 16:32:59 INFO dstream.TransformedDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.TransformedDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.TransformedDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.TransformedDStream: Initialized and validated org.apache.spark.streaming.dstream.TransformedDStream@6990d758 18/04/17 16:32:59 INFO dstream.ForEachDStream: Slide time = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1) 18/04/17 16:32:59 INFO dstream.ForEachDStream: Checkpoint interval = null 18/04/17 16:32:59 INFO dstream.ForEachDStream: Remember duration = 60000 ms 18/04/17 16:32:59 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@7cc8cb74 18/04/17 16:32:59 INFO util.RecurringTimer: Started timer for JobGenerator at time 1523971980000 18/04/17 16:32:59 INFO scheduler.JobGenerator: Started JobGenerator at 1523971980000 ms 18/04/17 16:32:59 INFO scheduler.JobScheduler: Started JobScheduler 18/04/17 16:32:59 INFO streaming.StreamingContext: StreamingContext started 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property 
auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: 
Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: 
Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO 
utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 
INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 
INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO utils.VerifiableProperties: Verifying properties 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property auto.offset.reset is overridden to smallest 18/04/17 16:33:00 WARN utils.VerifiableProperties: Property enable.auto.commit is not valid 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property fetch.message.max.bytes is overridden to 10485760 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property group.id is overridden to predictor-engine 18/04/17 16:33:00 INFO utils.VerifiableProperties: Property zookeeper.connect is overridden to 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.3 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.4 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.6 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.2 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.7 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.0 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.1 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.5 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.8 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.9 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.10 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.11 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.12 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.3 from job set of time 1523971980000 ms 
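The VerifiableProperties entries above record the Kafka consumer settings handed to each direct stream: auto.offset.reset=smallest, group.id=predictor-engine, fetch.message.max.bytes=10485760, an empty zookeeper.connect, and an enable.auto.commit key that the Kafka 0.8 simple-consumer configuration does not recognize (hence the repeated WARN). A hedged sketch of how such parameters are typically passed to KafkaUtils.createDirectStream in Spark 1.6's Java API; the broker list and topic name are placeholders, not values taken from this log, and jssc is the JavaStreamingContext from the driver setup:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import kafka.serializer.StringDecoder;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    // Consumer properties mirroring the overrides logged above; the broker list
    // is an assumed placeholder because it does not appear in this log.
    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "<broker-host:9092>");  // assumption
    kafkaParams.put("auto.offset.reset", "smallest");
    kafkaParams.put("group.id", "predictor-engine");
    kafkaParams.put("fetch.message.max.bytes", "10485760");
    // enable.auto.commit is ignored by the 0.8 simple consumer, which is why the
    // log prints "Property enable.auto.commit is not valid".

    JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
        jssc,
        String.class, String.class,
        StringDecoder.class, StringDecoder.class,
        kafkaParams,
        Collections.singleton("<topic>"));  // placeholder topic name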
18/04/17 16:33:00 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.4 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.0 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.14 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.14 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.13 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.15 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.16 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.13 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.16 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.17 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.17 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.19 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.18 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.20 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.21 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.22 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.23 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.21 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.24 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.25 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.26 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.27 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.28 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.29 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.30 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.30 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.31 from job set of time 
1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.32 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Added jobs for time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.33 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.34 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Starting job streaming job 1523971980000 ms.35 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.35 from job set of time 1523971980000 ms 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 0 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (KafkaRDD[7] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 5.7 KB, free 491.7 MB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.7 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (KafkaRDD[7] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 0.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 15 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1 (KafkaRDD[29] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 5.7 KB, free 491.7 MB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.7 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (KafkaRDD[29] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 1.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 2 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 2 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 2 (KafkaRDD[23] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, 
***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (KafkaRDD[23] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 2.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 10 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 3 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 3 (KafkaRDD[19] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_3 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_3_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 3 (KafkaRDD[19] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 3.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 25 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 4 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 4 (KafkaRDD[34] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 3.0 (TID 3, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_4 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_4_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 4 (KafkaRDD[34] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 4.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 6 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 5 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 5 (KafkaRDD[15] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_5 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 4.0 (TID 4, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_5_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 5 (KafkaRDD[15] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 5.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 1 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 6 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 6 (KafkaRDD[10] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 5.0 (TID 5, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_6 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_6_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 6 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 6 (KafkaRDD[10] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 6.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 7 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 7 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO 
scheduler.DAGScheduler: Submitting ResultStage 7 (KafkaRDD[12] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 6.0 (TID 6, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_7 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_7_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 7 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 7 (KafkaRDD[12] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 7.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 14 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 8 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 8 (KafkaRDD[6] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 7.0 (TID 7, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_8 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 8 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 8 (KafkaRDD[6] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 8.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 8 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 9 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 9 (KafkaRDD[32] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 8.0 (TID 8, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_9 stored as values in memory (estimated size 5.6 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_9_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 
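Each "Got job N (foreachPartition at PredictorEngineApp.java:153)" entry corresponds to one output action per stream, and every resulting ResultStage runs over a single KafkaRDD partition, which is why each task set carries exactly one task. A sketch of the output pattern these entries suggest, assuming the action at line 153 is a foreachPartition nested in a foreachRDD; the per-record processing is a placeholder, not the application's actual logic:

    import scala.Tuple2;

    // Illustrative output action only; the real processing at
    // PredictorEngineApp.java:153 is not shown in this log.
    stream.foreachRDD(rdd ->
        rdd.foreachPartition(records -> {
            while (records.hasNext()) {
                Tuple2<String, String> record = records.next();
                // placeholder: score the record and write the prediction out
            }
        })
    );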
18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_9_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 9 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 9 (KafkaRDD[32] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 9.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 3 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 10 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 10 (KafkaRDD[27] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 9.0 (TID 9, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_10 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_10_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_10_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 10 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 10 (KafkaRDD[27] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 10.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 11 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 11 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 11 (KafkaRDD[2] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 10.0 (TID 10, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_11 stored as values in memory (estimated size 5.6 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_11_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_11_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 11 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 11 (KafkaRDD[2] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 11.0 with 1 tasks 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added 
broadcast_1_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 13 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 12 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 12 (KafkaRDD[25] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 11.0 (TID 11, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_12 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_12_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_12_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 12 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_3_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 12 (KafkaRDD[25] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 12.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 21 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 13 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 13 (KafkaRDD[24] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 12.0 (TID 12, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_13 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_4_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_13_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_13_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 13 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 13 (KafkaRDD[24] at createDirectStream at 
PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 13.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 17 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 14 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 14 (KafkaRDD[11] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_14 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 13.0 (TID 13, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_14_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_14_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 14 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 14 (KafkaRDD[11] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 14.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 22 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 15 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_5_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 15 (KafkaRDD[31] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_15 stored as values in memory (estimated size 5.6 KB, free 491.5 MB) 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 14.0 (TID 14, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_15_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_15_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 15 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 15 (KafkaRDD[31] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 15.0 with 1 tasks 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_7_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 20 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 
INFO scheduler.DAGScheduler: Final stage: ResultStage 16 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 16 (KafkaRDD[18] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_16 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_6_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 15.0 (TID 15, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_16_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_16_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 16 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 16 (KafkaRDD[18] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 16.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 12 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 17 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 17 (KafkaRDD[22] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_17 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 16.0 (TID 16, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_17_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_17_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 17 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 17 (KafkaRDD[22] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 17.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 16 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 18 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 
16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 18 (KafkaRDD[5] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_18 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 17.0 (TID 17, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_18_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_18_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 18 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 18 (KafkaRDD[5] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 18.0 with 1 tasks 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Got job 24 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 19 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting ResultStage 19 (KafkaRDD[1] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_19 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 16:33:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 18.0 (TID 18, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:33:00 INFO storage.MemoryStore: Block broadcast_19_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 16:33:00 INFO storage.BlockManagerInfo: Added broadcast_19_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:00 INFO spark.SparkContext: Created broadcast 19 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 19 (KafkaRDD[1] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:00 INFO cluster.YarnClusterScheduler: Adding task set 19.0 with 1 tasks 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Got job 23 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Final stage: ResultStage 20 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting ResultStage 20 (KafkaRDD[28] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_20 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 16:33:01 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 19.0 (TID 19, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_20_piece0 stored as bytes in memory 
(estimated size 3.1 KB, free 491.5 MB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_20_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:01 INFO spark.SparkContext: Created broadcast 20 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 20 (KafkaRDD[28] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:01 INFO cluster.YarnClusterScheduler: Adding task set 20.0 with 1 tasks 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Got job 19 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Final stage: ResultStage 21 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting ResultStage 21 (KafkaRDD[9] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_21 stored as values in memory (estimated size 5.6 KB, free 491.5 MB) 18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_21_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_21_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:01 INFO spark.SparkContext: Created broadcast 21 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 21 (KafkaRDD[9] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:01 INFO cluster.YarnClusterScheduler: Adding task set 21.0 with 1 tasks 18/04/17 16:33:01 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 20.0 (TID 20, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Got job 4 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Final stage: ResultStage 22 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting ResultStage 22 (KafkaRDD[20] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_22 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_22_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 16:33:01 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 21.0 (TID 21, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_22_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:01 INFO spark.SparkContext: Created broadcast 22 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 22 (KafkaRDD[20] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:01 INFO cluster.YarnClusterScheduler: Adding task set 22.0 with 1 tasks 18/04/17 16:33:01 INFO 
scheduler.DAGScheduler: Got job 5 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Final stage: ResultStage 23 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting ResultStage 23 (KafkaRDD[26] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_23 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 16:33:01 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 22.0 (TID 22, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_23_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_23_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:01 INFO spark.SparkContext: Created broadcast 23 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 23 (KafkaRDD[26] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:01 INFO cluster.YarnClusterScheduler: Adding task set 23.0 with 1 tasks 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Got job 9 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Final stage: ResultStage 24 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting ResultStage 24 (KafkaRDD[8] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_24 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 16:33:01 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 23.0 (TID 23, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_24_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_24_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:01 INFO spark.SparkContext: Created broadcast 24 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 24 (KafkaRDD[8] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:01 INFO cluster.YarnClusterScheduler: Adding task set 24.0 with 1 tasks 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Got job 18 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Final stage: ResultStage 25 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting ResultStage 25 (KafkaRDD[33] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 
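The records above show the shape of one streaming batch: each output operation becomes its own job, and every job is a single-task ResultStage over a KafkaRDD produced by createDirectStream at PredictorEngineApp.java:125 and consumed by foreachPartition at PredictorEngineApp.java:153. The application source is not part of this log, so the following is only a minimal sketch, assuming the older kafka-0.8 direct-stream Java API, one stream per topic, and made-up broker, topic and app names, of driver code that yields this one-job-per-stream-per-batch pattern:

    // Minimal sketch (editor's reconstruction, not the actual PredictorEngineApp source) of a
    // Spark Streaming driver that produces the pattern logged above: one KafkaRDD per stream
    // per batch, each consumed by a single foreachPartition output operation. Broker list,
    // topic names, app name and the 60-second batch interval are all assumptions.
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    public class DirectStreamSketch {
      public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical brokers

        // One direct stream per topic; every batch then materialises one KafkaRDD per stream.
        for (String topic : Arrays.asList("events-a", "events-b", "events-c")) { // hypothetical topics
          JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
              jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
              kafkaParams, Collections.singleton(topic));

          // One output operation per stream -> one job with a single ResultStage per stream
          // per batch, which is what the scheduler records at 16:33:00-16:33:01 above show.
          stream.foreachRDD(rdd ->
              rdd.foreachPartition(records -> {
                while (records.hasNext()) {
                  Tuple2<String, String> record = records.next();
                  // score the record and persist the result (see the HBase sketch below)
                }
              }));
        }

        jssc.start();
        jssc.awaitTermination();
      }
    }

Each KafkaRDD in this batch reports exactly one output partition, so every ResultStage carries a single task ("Adding task set N.0 with 1 tasks"); with more Kafka partitions per topic the same code would produce multi-task stages.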
18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_25 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 16:33:01 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 24.0 (TID 24, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:33:01 INFO storage.MemoryStore: Block broadcast_25_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_25_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:33:01 INFO spark.SparkContext: Created broadcast 25 from broadcast at DAGScheduler.scala:1006 18/04/17 16:33:01 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 25 (KafkaRDD[33] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:33:01 INFO cluster.YarnClusterScheduler: Adding task set 25.0 with 1 tasks 18/04/17 16:33:01 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 25.0 (TID 25, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_10_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_11_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_21_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_9_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_23_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_25_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_20_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_15_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_14_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_16_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_24_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_13_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_22_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_19_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_17_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_12_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:33:01 INFO storage.BlockManagerInfo: Added broadcast_18_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 
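In the records that follow, every finished job is paired with a fresh hconnection-0x... ZooKeeper session to the /hbase quorum that is opened and closed within the same second. Where exactly PredictorEngineApp creates these connections is not visible in the log, so the next block is only a minimal sketch, assuming the HBase 1.x client API (consistent with the ConnectionManager$HConnectionImplementation and RecoverableZooKeeper entries) and hypothetical table, column-family and qualifier names, of a write path that opens and immediately closes one connection per batch of records:

    // Minimal sketch (editor's reconstruction) of an HBase write path that creates and
    // immediately closes one Connection per invocation, which is what produces a fresh
    // hconnection-0x.../ZooKeeper sessionid after every finished job in the records below.
    // HBase 1.x client API assumed; table, column family and qualifier are hypothetical.
    import java.util.Iterator;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import scala.Tuple2;

    public class HBaseWriteSketch {

      // Could be called from rdd.foreachPartition(...) or from driver-side code after a job;
      // either way, each call opens its own Connection (and therefore its own ZooKeeper
      // session) and closes it as soon as the records are written.
      public static void writeRecords(Iterator<Tuple2<String, String>> records) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // hbase-site.xml assumed on the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("predictions"))) { // hypothetical table
          while (records.hasNext()) {
            Tuple2<String, String> record = records.next();
            Put put = new Put(Bytes.toBytes(record._1()));
            put.addColumn(Bytes.toBytes("p"), Bytes.toBytes("score"), Bytes.toBytes(record._2())); // hypothetical cf:qualifier
            table.put(put);
          }
        }
      }
    }

An HConnection owns its ZooKeeper session, so a connection reused across jobs would register a single sessionid for the whole application instead of the open-then-close pairs seen after each "Job N finished" line below.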
18/04/17 16:33:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 4516 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:33:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 18/04/17 16:33:05 INFO scheduler.DAGScheduler: ResultStage 0 (foreachPartition at PredictorEngineApp.java:153) finished in 4.525 s 18/04/17 16:33:05 INFO scheduler.DAGScheduler: Job 0 finished: foreachPartition at PredictorEngineApp.java:153, took 4.803468 s 18/04/17 16:33:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6d8afa28 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6d8afa280x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58173, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9192, negotiated timeout = 60000 18/04/17 16:33:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9192 18/04/17 16:33:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9192 closed 18/04/17 16:33:05 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.7 from job set of time 1523971980000 ms 18/04/17 16:33:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 12.0 (TID 12) in 5482 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:33:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 12.0, whose tasks have all completed, from pool 18/04/17 16:33:06 INFO scheduler.DAGScheduler: ResultStage 12 (foreachPartition at PredictorEngineApp.java:153) finished in 5.484 s 18/04/17 16:33:06 INFO scheduler.DAGScheduler: Job 13 finished: foreachPartition at PredictorEngineApp.java:153, took 6.092242 s 18/04/17 16:33:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x47d127f9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x47d127f90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58182, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9195, negotiated timeout = 60000 18/04/17 16:33:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9195 18/04/17 16:33:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9195 closed 18/04/17 16:33:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:06 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.25 from job set of time 1523971980000 ms 18/04/17 16:33:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 24.0 (TID 24) in 7097 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:33:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 24.0, whose tasks have all completed, from pool 18/04/17 16:33:08 INFO scheduler.DAGScheduler: ResultStage 24 (foreachPartition at PredictorEngineApp.java:153) finished in 7.101 s 18/04/17 16:33:08 INFO scheduler.DAGScheduler: Job 9 finished: foreachPartition at PredictorEngineApp.java:153, took 7.829656 s 18/04/17 16:33:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d584617 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7d5846170x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58190, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9197, negotiated timeout = 60000 18/04/17 16:33:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9197 18/04/17 16:33:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9197 closed 18/04/17 16:33:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:08 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.8 from job set of time 1523971980000 ms 18/04/17 16:33:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 22.0 (TID 22) in 7529 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:33:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 22.0, whose tasks have all completed, from pool 18/04/17 16:33:08 INFO scheduler.DAGScheduler: ResultStage 22 (foreachPartition at PredictorEngineApp.java:153) finished in 7.532 s 18/04/17 16:33:08 INFO scheduler.DAGScheduler: Job 4 finished: foreachPartition at PredictorEngineApp.java:153, took 8.247911 s 18/04/17 16:33:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ccc258c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4ccc258c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51811, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9161, negotiated timeout = 60000 18/04/17 16:33:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9161 18/04/17 16:33:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9161 closed 18/04/17 16:33:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:08 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.20 from job set of time 1523971980000 ms 18/04/17 16:33:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 15.0 (TID 15) in 9458 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:33:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 15.0, whose tasks have all completed, from pool 18/04/17 16:33:10 INFO scheduler.DAGScheduler: ResultStage 15 (foreachPartition at PredictorEngineApp.java:153) finished in 9.462 s 18/04/17 16:33:10 INFO scheduler.DAGScheduler: Job 22 finished: foreachPartition at PredictorEngineApp.java:153, took 10.105219 s 18/04/17 16:33:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x174dc2a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x174dc2a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58207, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9198, negotiated timeout = 60000 18/04/17 16:33:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9198 18/04/17 16:33:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9198 closed 18/04/17 16:33:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:10 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.31 from job set of time 1523971980000 ms 18/04/17 16:33:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 7.0 (TID 7) in 9740 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:33:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 7.0, whose tasks have all completed, from pool 18/04/17 16:33:10 INFO scheduler.DAGScheduler: ResultStage 7 (foreachPartition at PredictorEngineApp.java:153) finished in 9.741 s 18/04/17 16:33:10 INFO scheduler.DAGScheduler: Job 7 finished: foreachPartition at PredictorEngineApp.java:153, took 10.182892 s 18/04/17 16:33:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6bf17e86 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6bf17e860x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58210, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9199, negotiated timeout = 60000 18/04/17 16:33:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9199 18/04/17 16:33:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9199 closed 18/04/17 16:33:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:10 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.12 from job set of time 1523971980000 ms 18/04/17 16:33:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 3.0 (TID 3) in 10375 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:33:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 3.0, whose tasks have all completed, from pool 18/04/17 16:33:11 INFO scheduler.DAGScheduler: ResultStage 3 (foreachPartition at PredictorEngineApp.java:153) finished in 10.376 s 18/04/17 16:33:11 INFO scheduler.DAGScheduler: Job 10 finished: foreachPartition at PredictorEngineApp.java:153, took 10.722890 s 18/04/17 16:33:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6daad791 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6daad7910x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58214, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c919b, negotiated timeout = 60000 18/04/17 16:33:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c919b 18/04/17 16:33:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c919b closed 18/04/17 16:33:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:11 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.19 from job set of time 1523971980000 ms 18/04/17 16:33:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 10.0 (TID 10) in 11109 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:33:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 10.0, whose tasks have all completed, from pool 18/04/17 16:33:11 INFO scheduler.DAGScheduler: ResultStage 10 (foreachPartition at PredictorEngineApp.java:153) finished in 11.111 s 18/04/17 16:33:11 INFO scheduler.DAGScheduler: Job 3 finished: foreachPartition at PredictorEngineApp.java:153, took 11.656037 s 18/04/17 16:33:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xb3cc61 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xb3cc610x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58217, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c919d, negotiated timeout = 60000 18/04/17 16:33:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c919d 18/04/17 16:33:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c919d closed 18/04/17 16:33:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:11 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.27 from job set of time 1523971980000 ms 18/04/17 16:33:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 9.0 (TID 9) in 14077 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:33:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 9.0, whose tasks have all completed, from pool 18/04/17 16:33:14 INFO scheduler.DAGScheduler: ResultStage 9 (foreachPartition at PredictorEngineApp.java:153) finished in 14.094 s 18/04/17 16:33:14 INFO scheduler.DAGScheduler: Job 8 finished: foreachPartition at PredictorEngineApp.java:153, took 14.557898 s 18/04/17 16:33:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5271234a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5271234a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58228, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91a1, negotiated timeout = 60000 18/04/17 16:33:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91a1 18/04/17 16:33:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91a1 closed 18/04/17 16:33:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:14 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.32 from job set of time 1523971980000 ms 18/04/17 16:33:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 14517 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:33:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool 18/04/17 16:33:15 INFO scheduler.DAGScheduler: ResultStage 2 (foreachPartition at PredictorEngineApp.java:153) finished in 14.519 s 18/04/17 16:33:15 INFO scheduler.DAGScheduler: Job 2 finished: foreachPartition at PredictorEngineApp.java:153, took 14.849432 s 18/04/17 16:33:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x39c2c544 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x39c2c5440x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34595, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a8b, negotiated timeout = 60000 18/04/17 16:33:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a8b 18/04/17 16:33:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a8b closed 18/04/17 16:33:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:15 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.23 from job set of time 1523971980000 ms 18/04/17 16:33:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 15002 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:33:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool 18/04/17 16:33:15 INFO scheduler.DAGScheduler: ResultStage 1 (foreachPartition at PredictorEngineApp.java:153) finished in 15.009 s 18/04/17 16:33:15 INFO scheduler.DAGScheduler: Job 15 finished: foreachPartition at PredictorEngineApp.java:153, took 15.308030 s 18/04/17 16:33:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x69e0bda6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x69e0bda60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34604, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a8c, negotiated timeout = 60000 18/04/17 16:33:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a8c 18/04/17 16:33:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a8c closed 18/04/17 16:33:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:15 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.29 from job set of time 1523971980000 ms 18/04/17 16:33:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 14.0 (TID 14) in 15572 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:33:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 14.0, whose tasks have all completed, from pool 18/04/17 16:33:16 INFO scheduler.DAGScheduler: ResultStage 14 (foreachPartition at PredictorEngineApp.java:153) finished in 15.578 s 18/04/17 16:33:16 INFO scheduler.DAGScheduler: Job 17 finished: foreachPartition at PredictorEngineApp.java:153, took 16.211141 s 18/04/17 16:33:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3230cb61 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3230cb610x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34608, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a8d, negotiated timeout = 60000 18/04/17 16:33:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a8d 18/04/17 16:33:16 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a8d closed 18/04/17 16:33:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:16 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.11 from job set of time 1523971980000 ms 18/04/17 16:33:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 5.0 (TID 5) in 16942 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:33:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 5.0, whose tasks have all completed, from pool 18/04/17 16:33:17 INFO scheduler.DAGScheduler: ResultStage 5 (foreachPartition at PredictorEngineApp.java:153) finished in 16.943 s 18/04/17 16:33:17 INFO scheduler.DAGScheduler: Job 6 finished: foreachPartition at PredictorEngineApp.java:153, took 17.350588 s 18/04/17 16:33:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4b4e08c2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4b4e08c20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34613, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a8e, negotiated timeout = 60000 18/04/17 16:33:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a8e 18/04/17 16:33:17 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a8e closed 18/04/17 16:33:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:17 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.15 from job set of time 1523971980000 ms 18/04/17 16:33:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 11.0 (TID 11) in 17567 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:33:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 11.0, whose tasks have all completed, from pool 18/04/17 16:33:18 INFO scheduler.DAGScheduler: ResultStage 11 (foreachPartition at PredictorEngineApp.java:153) finished in 17.569 s 18/04/17 16:33:18 INFO scheduler.DAGScheduler: Job 11 finished: foreachPartition at PredictorEngineApp.java:153, took 18.154136 s 18/04/17 16:33:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7f72d4a1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7f72d4a10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51876, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9166, negotiated timeout = 60000 18/04/17 16:33:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9166 18/04/17 16:33:18 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9166 closed 18/04/17 16:33:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:18 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.2 from job set of time 1523971980000 ms 18/04/17 16:33:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 16.0 (TID 16) in 17881 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:33:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 16.0, whose tasks have all completed, from pool 18/04/17 16:33:18 INFO scheduler.DAGScheduler: ResultStage 16 (foreachPartition at PredictorEngineApp.java:153) finished in 17.903 s 18/04/17 16:33:18 INFO scheduler.DAGScheduler: Job 20 finished: foreachPartition at PredictorEngineApp.java:153, took 18.556756 s 18/04/17 16:33:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x680e06dc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x680e06dc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58265, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91a6, negotiated timeout = 60000 18/04/17 16:33:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91a6 18/04/17 16:33:18 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91a6 closed 18/04/17 16:33:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:18 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.18 from job set of time 1523971980000 ms 18/04/17 16:33:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 25.0 (TID 25) in 18562 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:33:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 25.0, whose tasks have all completed, from pool 18/04/17 16:33:19 INFO scheduler.DAGScheduler: ResultStage 25 (foreachPartition at PredictorEngineApp.java:153) finished in 18.566 s 18/04/17 16:33:19 INFO scheduler.DAGScheduler: Job 18 finished: foreachPartition at PredictorEngineApp.java:153, took 19.300681 s 18/04/17 16:33:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xdb007f3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xdb007f30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34631, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a90, negotiated timeout = 60000 18/04/17 16:33:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a90 18/04/17 16:33:19 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a90 closed 18/04/17 16:33:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:19 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.33 from job set of time 1523971980000 ms 18/04/17 16:33:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 18.0 (TID 18) in 20920 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:33:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 18.0, whose tasks have all completed, from pool 18/04/17 16:33:21 INFO scheduler.DAGScheduler: ResultStage 18 (foreachPartition at PredictorEngineApp.java:153) finished in 20.925 s 18/04/17 16:33:21 INFO scheduler.DAGScheduler: Job 16 finished: foreachPartition at PredictorEngineApp.java:153, took 21.613032 s 18/04/17 16:33:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5c758e40 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5c758e400x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51898, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9168, negotiated timeout = 60000 18/04/17 16:33:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9168 18/04/17 16:33:21 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9168 closed 18/04/17 16:33:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:21 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.5 from job set of time 1523971980000 ms 18/04/17 16:33:23 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 8.0 (TID 8) in 23104 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:33:23 INFO cluster.YarnClusterScheduler: Removed TaskSet 8.0, whose tasks have all completed, from pool 18/04/17 16:33:23 INFO scheduler.DAGScheduler: ResultStage 8 (foreachPartition at PredictorEngineApp.java:153) finished in 23.106 s 18/04/17 16:33:23 INFO scheduler.DAGScheduler: Job 14 finished: foreachPartition at PredictorEngineApp.java:153, took 23.560629 s 18/04/17 16:33:23 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5e547825 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:23 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5e5478250x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:23 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:23 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34649, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:23 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a93, negotiated timeout = 60000 18/04/17 16:33:23 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a93 18/04/17 16:33:23 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a93 closed 18/04/17 16:33:23 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:23 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.6 from job set of time 1523971980000 ms 18/04/17 16:33:24 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 6.0 (TID 6) in 23684 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:33:24 INFO cluster.YarnClusterScheduler: Removed TaskSet 6.0, whose tasks have all completed, from pool 18/04/17 16:33:24 INFO scheduler.DAGScheduler: ResultStage 6 (foreachPartition at PredictorEngineApp.java:153) finished in 23.685 s 18/04/17 16:33:24 INFO scheduler.DAGScheduler: Job 1 finished: foreachPartition at PredictorEngineApp.java:153, took 24.111392 s 18/04/17 16:33:24 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x133b0161 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x133b01610x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:24 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:24 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51911, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:24 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a916b, negotiated timeout = 60000 18/04/17 16:33:24 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a916b 18/04/17 16:33:24 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a916b closed 18/04/17 16:33:24 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:24 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.10 from job set of time 1523971980000 ms 18/04/17 16:33:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 23.0 (TID 23) in 24956 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:33:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 23.0, whose tasks have all completed, from pool 18/04/17 16:33:25 INFO scheduler.DAGScheduler: ResultStage 23 (foreachPartition at PredictorEngineApp.java:153) finished in 24.958 s 18/04/17 16:33:25 INFO scheduler.DAGScheduler: Job 5 finished: foreachPartition at PredictorEngineApp.java:153, took 25.681555 s 18/04/17 16:33:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x77769391 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x777693910x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51923, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:26 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a916c, negotiated timeout = 60000 18/04/17 16:33:26 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a916c 18/04/17 16:33:26 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a916c closed 18/04/17 16:33:26 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:26 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.26 from job set of time 1523971980000 ms 18/04/17 16:33:30 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 21.0 (TID 21) in 29491 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:33:30 INFO cluster.YarnClusterScheduler: Removed TaskSet 21.0, whose tasks have all completed, from pool 18/04/17 16:33:30 INFO scheduler.DAGScheduler: ResultStage 21 (foreachPartition at PredictorEngineApp.java:153) finished in 29.497 s 18/04/17 16:33:30 INFO scheduler.DAGScheduler: Job 19 finished: foreachPartition at PredictorEngineApp.java:153, took 30.205785 s 18/04/17 16:33:30 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b0a58dc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:30 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1b0a58dc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:30 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:30 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34680, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:30 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28a97, negotiated timeout = 60000 18/04/17 16:33:30 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28a97 18/04/17 16:33:30 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28a97 closed 18/04/17 16:33:30 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:30 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.9 from job set of time 1523971980000 ms 18/04/17 16:33:31 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 20.0 (TID 20) in 30185 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:33:31 INFO cluster.YarnClusterScheduler: Removed TaskSet 20.0, whose tasks have all completed, from pool 18/04/17 16:33:31 INFO scheduler.DAGScheduler: ResultStage 20 (foreachPartition at PredictorEngineApp.java:153) finished in 30.192 s 18/04/17 16:33:31 INFO scheduler.DAGScheduler: Job 23 finished: foreachPartition at PredictorEngineApp.java:153, took 30.892248 s 18/04/17 16:33:31 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x50928f0c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:31 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x50928f0c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:31 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:31 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51940, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:31 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a916f, negotiated timeout = 60000 18/04/17 16:33:31 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a916f 18/04/17 16:33:31 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a916f closed 18/04/17 16:33:31 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:31 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.28 from job set of time 1523971980000 ms 18/04/17 16:33:31 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 4.0 (TID 4) in 31205 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:33:31 INFO cluster.YarnClusterScheduler: Removed TaskSet 4.0, whose tasks have all completed, from pool 18/04/17 16:33:31 INFO scheduler.DAGScheduler: ResultStage 4 (foreachPartition at PredictorEngineApp.java:153) finished in 31.208 s 18/04/17 16:33:31 INFO scheduler.DAGScheduler: Job 25 finished: foreachPartition at PredictorEngineApp.java:153, took 31.570432 s 18/04/17 16:33:31 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2a9ca257 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:31 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2a9ca2570x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:31 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:31 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58328, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:31 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91a9, negotiated timeout = 60000 18/04/17 16:33:31 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91a9 18/04/17 16:33:31 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91a9 closed 18/04/17 16:33:31 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:31 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.34 from job set of time 1523971980000 ms 18/04/17 16:33:32 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 13.0 (TID 13) in 31414 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:33:32 INFO cluster.YarnClusterScheduler: Removed TaskSet 13.0, whose tasks have all completed, from pool 18/04/17 16:33:32 INFO scheduler.DAGScheduler: ResultStage 13 (foreachPartition at PredictorEngineApp.java:153) finished in 31.417 s 18/04/17 16:33:32 INFO scheduler.DAGScheduler: Job 21 finished: foreachPartition at PredictorEngineApp.java:153, took 32.041210 s 18/04/17 16:33:32 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x55794a03 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:32 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x55794a030x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:32 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:32 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:51950, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:32 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9171, negotiated timeout = 60000 18/04/17 16:33:32 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9171 18/04/17 16:33:32 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9171 closed 18/04/17 16:33:32 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:32 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.24 from job set of time 1523971980000 ms 18/04/17 16:33:39 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 19.0 (TID 19) in 38108 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:33:39 INFO cluster.YarnClusterScheduler: Removed TaskSet 19.0, whose tasks have all completed, from pool 18/04/17 16:33:39 INFO scheduler.DAGScheduler: ResultStage 19 (foreachPartition at PredictorEngineApp.java:153) finished in 38.110 s 18/04/17 16:33:39 INFO scheduler.DAGScheduler: Job 24 finished: foreachPartition at PredictorEngineApp.java:153, took 38.805090 s 18/04/17 16:33:39 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1a4cce71 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:39 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1a4cce710x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:39 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:39 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58343, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:39 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91ac, negotiated timeout = 60000 18/04/17 16:33:39 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91ac 18/04/17 16:33:39 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91ac closed 18/04/17 16:33:39 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:39 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.1 from job set of time 1523971980000 ms 18/04/17 16:33:39 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 17.0 (TID 17) in 38845 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:33:39 INFO cluster.YarnClusterScheduler: Removed TaskSet 17.0, whose tasks have all completed, from pool 18/04/17 16:33:39 INFO scheduler.DAGScheduler: ResultStage 17 (foreachPartition at PredictorEngineApp.java:153) finished in 38.848 s 18/04/17 16:33:39 INFO scheduler.DAGScheduler: Job 12 finished: foreachPartition at PredictorEngineApp.java:153, took 39.527883 s 18/04/17 16:33:39 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2ad2ae5f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:33:39 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2ad2ae5f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:33:39 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:33:39 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58346, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:33:39 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91ad, negotiated timeout = 60000 18/04/17 16:33:39 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91ad 18/04/17 16:33:39 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91ad closed 18/04/17 16:33:39 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:33:39 INFO scheduler.JobScheduler: Finished job streaming job 1523971980000 ms.22 from job set of time 1523971980000 ms 18/04/17 16:33:39 INFO scheduler.JobScheduler: Total delay: 39.865 s for time 1523971980000 ms (execution: 39.626 s) 18/04/17 16:33:39 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:33:39 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 16:34:00 INFO scheduler.JobScheduler: Added jobs for time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.0 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.1 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.2 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.4 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.3 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.6 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.5 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.0 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.4 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.7 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.3 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.9 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.10 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.8 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.11 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.12 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.13 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.15 from job set of time 1523972040000 ms 
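
The jobs in this log all follow the same shape: a KafkaRDD produced by createDirectStream at PredictorEngineApp.java:125 is consumed by foreachPartition at PredictorEngineApp.java:153, and each streaming job is bracketed by an HBase ZooKeeper session that is opened and closed within a second or two. A driver along the following lines would produce that pattern; it is only a sketch — the broker list, topic, table, and column names are assumptions, and only the two call sites and the 60-second batch interval (inferred from the batch timestamps) come from the log.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import scala.Tuple2;

public class PredictorEngineSketch {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("PredictorEngineSketch");
    // 60 s batch interval, matching the one-minute spacing of the batch
    // timestamps (1523971980000, 1523972040000) in the log.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // assumed brokers
    Set<String> topics = new HashSet<>(Arrays.asList("predictions"));     // assumed topic

    // Receiver-less direct stream: each batch yields a KafkaRDD, as seen in
    // "createDirectStream at PredictorEngineApp.java:125".
    JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, topics);

    // Every output operation becomes one streaming job per batch; the real
    // application appears to register several dozen of them (jobs ms.0-ms.35
    // per batch), while this sketch shows a single one.
    stream.foreachRDD(rdd ->
        rdd.foreachPartition(records -> {
          // Opening an HBase connection per partition/task would explain the
          // ZooKeeper sessions that are created and closed within seconds.
          Configuration hbaseConf = HBaseConfiguration.create();
          try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
               Table table = connection.getTable(TableName.valueOf("predictions"))) { // assumed table
            while (records.hasNext()) {
              Tuple2<String, String> record = records.next();
              if (record._1() == null) {
                continue; // skip keyless messages in this sketch
              }
              Put put = new Put(Bytes.toBytes(record._1()));
              put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"), // assumed column
                  Bytes.toBytes(record._2()));
              table.put(put);
            }
          }
        }));

    jssc.start();
    jssc.awaitTermination();
  }
}

Whether the real application actually opens its HBase connection per partition or somewhere else cannot be told from these entries; the sketch simply mirrors the observed connection churn.
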
18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.16 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.17 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.14 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.13 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.16 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.19 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.18 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.20 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.17 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.22 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.14 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.21 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.23 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.25 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.24 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.27 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.26 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.21 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.31 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.30 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.29 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.28 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.32 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.35 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.34 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972040000 ms.33 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.35 from job set of time 
1523972040000 ms 18/04/17 16:34:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.30 from job set of time 1523972040000 ms 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 26 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 26 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 26 (KafkaRDD[51] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_26 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO spark.SparkContext: Starting job: foreachPartition 
at PredictorEngineApp.java:153 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_26_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_26_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 26 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 26 (KafkaRDD[51] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 26.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 27 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 27 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 27 (KafkaRDD[55] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 26.0 (TID 26, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_27 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_27_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_1_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_27_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 27 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 27 (KafkaRDD[55] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 27.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 28 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 28 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 28 (KafkaRDD[67] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 27.0 (TID 27, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_28 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_28_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_28_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 28 from broadcast at DAGScheduler.scala:1006 
18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 28 (KafkaRDD[67] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 28.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 29 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 29 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 29 (KafkaRDD[48] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 28.0 (TID 28, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_29 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_29_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_29_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 29 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_1_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 29 (KafkaRDD[48] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 29.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 30 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 30 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 30 (KafkaRDD[60] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 29.0 (TID 29, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_30 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_27_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO spark.ContextCleaner: Cleaned accumulator 16 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_26_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_28_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_30_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_30_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 
16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_22_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 30 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 30 (KafkaRDD[60] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 30.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 31 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 31 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 31 (KafkaRDD[44] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 30.0 (TID 30, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_31 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_22_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_31_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_31_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_29_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 31 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 31 (KafkaRDD[44] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 31.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 32 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 32 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 32 (KafkaRDD[47] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 31.0 (TID 31, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_32 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_32_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_32_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 32 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 
INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 32 (KafkaRDD[47] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 32.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 34 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 33 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 33 (KafkaRDD[63] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 32.0 (TID 32, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_33 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_30_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_20_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_33_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_33_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 33 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 33 (KafkaRDD[63] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 33.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 33 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 34 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 34 (KafkaRDD[45] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 33.0 (TID 33, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_34 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_20_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO spark.ContextCleaner: Cleaned accumulator 21 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_31_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_19_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_34_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO 
storage.BlockManagerInfo: Added broadcast_34_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 34 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 34 (KafkaRDD[45] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 34.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 35 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 35 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 35 (KafkaRDD[61] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 34.0 (TID 34, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_35 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_19_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_35_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_35_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 35 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 35 (KafkaRDD[61] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 35.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 36 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 36 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 36 (KafkaRDD[37] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 35.0 (TID 35, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_36 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_36_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_36_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 36 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_33_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO 
scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 36 (KafkaRDD[37] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 36.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 37 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 37 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 37 (KafkaRDD[54] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_32_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_24_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 36.0 (TID 36, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_37 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_35_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_24_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO spark.ContextCleaner: Cleaned accumulator 25 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_37_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_37_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_23_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 37 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 37 (KafkaRDD[54] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 37.0 with 1 tasks 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_34_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 38 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 38 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 38 (KafkaRDD[38] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 37.0 (TID 37, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_38 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO 
storage.BlockManagerInfo: Removed broadcast_23_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO spark.ContextCleaner: Cleaned accumulator 24 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_2_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_2_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_38_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_38_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 38 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 38 (KafkaRDD[38] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_37_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 38.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 39 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 39 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 39 (KafkaRDD[59] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_25_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_39 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 38.0 (TID 38, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_36_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_25_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO spark.ContextCleaner: Cleaned accumulator 26 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_14_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_14_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_38_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_39_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_39_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 39 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 39 (KafkaRDD[59] 
at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 39.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 40 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 40 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 40 (KafkaRDD[65] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 39.0 (TID 39, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_40 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_15_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Removed broadcast_15_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_40_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_40_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 40 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 40 (KafkaRDD[65] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 40.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 41 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 41 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 41 (KafkaRDD[56] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 40.0 (TID 40, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_39_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_41 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_41_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_41_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 41 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 41 (KafkaRDD[56] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task 
set 41.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 42 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 42 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_40_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 42 (KafkaRDD[70] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 41.0 (TID 41, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_42 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_42_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_42_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 42 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 42 (KafkaRDD[70] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 42.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 43 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 43 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 43 (KafkaRDD[46] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_41_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 42.0 (TID 42, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_43 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_43_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_43_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 43 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 43 (KafkaRDD[46] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 43.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 45 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 44 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 44 (KafkaRDD[41] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 43.0 (TID 43, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_42_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_44 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_44_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_44_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 44 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 44 (KafkaRDD[41] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 44.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 44 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 45 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 45 (KafkaRDD[42] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 44.0 (TID 44, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_45 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_43_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_45_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_45_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 45 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 45 (KafkaRDD[42] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 45.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 46 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 46 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 46 (KafkaRDD[58] 
at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 45.0 (TID 45, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_46 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_44_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_46_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_46_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 46 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 46 (KafkaRDD[58] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 46.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 47 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 47 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 47 (KafkaRDD[68] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_47 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 46.0 (TID 46, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_45_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_47_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_47_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 47 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 47 (KafkaRDD[68] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 47.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 48 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 48 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 48 (KafkaRDD[64] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 47.0 (TID 47, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 
16:34:00 INFO storage.MemoryStore: Block broadcast_48 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_48_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_48_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 48 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 48 (KafkaRDD[64] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 48.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 51 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 49 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 49 (KafkaRDD[69] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 48.0 (TID 48, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_49 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_49_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_49_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 49 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 49 (KafkaRDD[69] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 49.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 50 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 50 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 50 (KafkaRDD[43] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_50 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 49.0 (TID 49, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_47_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_50_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_50_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 
16:34:00 INFO spark.SparkContext: Created broadcast 50 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 50 (KafkaRDD[43] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 50.0 with 1 tasks 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_48_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Got job 49 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 51 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting ResultStage 51 (KafkaRDD[62] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 50.0 (TID 50, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_51 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:34:00 INFO storage.MemoryStore: Block broadcast_51_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_51_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:34:00 INFO spark.SparkContext: Created broadcast 51 from broadcast at DAGScheduler.scala:1006 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 51 (KafkaRDD[62] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Adding task set 51.0 with 1 tasks 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 51.0 (TID 51, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_50_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_51_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_46_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO storage.BlockManagerInfo: Added broadcast_49_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:34:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 36.0 (TID 36) in 188 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:34:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 36.0, whose tasks have all completed, from pool 18/04/17 16:34:00 INFO scheduler.DAGScheduler: ResultStage 36 (foreachPartition at PredictorEngineApp.java:153) finished in 0.189 s 18/04/17 16:34:00 INFO scheduler.DAGScheduler: Job 36 finished: foreachPartition at PredictorEngineApp.java:153, took 0.295081 s 18/04/17 16:34:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x34412875 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:00 INFO zookeeper.ZooKeeper: Initiating client connection, 
connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x344128750x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34822, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28aa6, negotiated timeout = 60000 18/04/17 16:34:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28aa6 18/04/17 16:34:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28aa6 closed 18/04/17 16:34:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.1 from job set of time 1523972040000 ms 18/04/17 16:34:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 35.0 (TID 35) in 944 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:34:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 35.0, whose tasks have all completed, from pool 18/04/17 16:34:01 INFO scheduler.DAGScheduler: ResultStage 35 (foreachPartition at PredictorEngineApp.java:153) finished in 0.945 s 18/04/17 16:34:01 INFO scheduler.DAGScheduler: Job 35 finished: foreachPartition at PredictorEngineApp.java:153, took 1.026256 s 18/04/17 16:34:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2155427f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2155427f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52082, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9182, negotiated timeout = 60000 18/04/17 16:34:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9182 18/04/17 16:34:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9182 closed 18/04/17 16:34:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:01 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.25 from job set of time 1523972040000 ms 18/04/17 16:34:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 28.0 (TID 28) in 2224 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:34:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 28.0, whose tasks have all completed, from pool 18/04/17 16:34:02 INFO scheduler.DAGScheduler: ResultStage 28 (foreachPartition at PredictorEngineApp.java:153) finished in 2.224 s 18/04/17 16:34:02 INFO scheduler.DAGScheduler: Job 28 finished: foreachPartition at PredictorEngineApp.java:153, took 2.251924 s 18/04/17 16:34:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7eb015a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7eb015a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34831, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28aa7, negotiated timeout = 60000 18/04/17 16:34:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28aa7 18/04/17 16:34:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28aa7 closed 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.31 from job set of time 1523972040000 ms 18/04/17 16:34:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 50.0 (TID 50) in 2088 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:34:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 50.0, whose tasks have all completed, from pool 18/04/17 16:34:02 INFO scheduler.DAGScheduler: ResultStage 50 (foreachPartition at PredictorEngineApp.java:153) finished in 2.089 s 18/04/17 16:34:02 INFO scheduler.DAGScheduler: Job 50 finished: foreachPartition at PredictorEngineApp.java:153, took 2.321949 s 18/04/17 16:34:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x50072774 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x500727740x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52090, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9183, negotiated timeout = 60000 18/04/17 16:34:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9183 18/04/17 16:34:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9183 closed 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.7 from job set of time 1523972040000 ms 18/04/17 16:34:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 29.0 (TID 29) in 2632 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:34:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 29.0, whose tasks have all completed, from pool 18/04/17 16:34:02 INFO scheduler.DAGScheduler: ResultStage 29 (foreachPartition at PredictorEngineApp.java:153) finished in 2.633 s 18/04/17 16:34:02 INFO scheduler.DAGScheduler: Job 29 finished: foreachPartition at PredictorEngineApp.java:153, took 2.664916 s 18/04/17 16:34:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x60f8cba5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x60f8cba50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52093, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9185, negotiated timeout = 60000 18/04/17 16:34:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9185 18/04/17 16:34:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9185 closed 18/04/17 16:34:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.12 from job set of time 1523972040000 ms 18/04/17 16:34:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 31.0 (TID 31) in 3211 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:34:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 31.0, whose tasks have all completed, from pool 18/04/17 16:34:03 INFO scheduler.DAGScheduler: ResultStage 31 (foreachPartition at PredictorEngineApp.java:153) finished in 3.212 s 18/04/17 16:34:03 INFO scheduler.DAGScheduler: Job 31 finished: foreachPartition at PredictorEngineApp.java:153, took 3.265427 s 18/04/17 16:34:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2b38f123 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2b38f1230x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58482, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91b5, negotiated timeout = 60000 18/04/17 16:34:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91b5 18/04/17 16:34:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91b5 closed 18/04/17 16:34:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.8 from job set of time 1523972040000 ms 18/04/17 16:34:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 26.0 (TID 26) in 3494 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:34:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 26.0, whose tasks have all completed, from pool 18/04/17 16:34:03 INFO scheduler.DAGScheduler: ResultStage 26 (foreachPartition at PredictorEngineApp.java:153) finished in 3.495 s 18/04/17 16:34:03 INFO scheduler.DAGScheduler: Job 26 finished: foreachPartition at PredictorEngineApp.java:153, took 3.514179 s 18/04/17 16:34:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6671859f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6671859f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52103, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9186, negotiated timeout = 60000 18/04/17 16:34:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9186 18/04/17 16:34:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9186 closed 18/04/17 16:34:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.15 from job set of time 1523972040000 ms 18/04/17 16:34:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 27.0 (TID 27) in 3896 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:34:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 27.0, whose tasks have all completed, from pool 18/04/17 16:34:04 INFO scheduler.DAGScheduler: ResultStage 27 (foreachPartition at PredictorEngineApp.java:153) finished in 3.896 s 18/04/17 16:34:04 INFO scheduler.DAGScheduler: Job 27 finished: foreachPartition at PredictorEngineApp.java:153, took 3.920627 s 18/04/17 16:34:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x28421ccc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x28421ccc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34850, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28aa9, negotiated timeout = 60000 18/04/17 16:34:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28aa9 18/04/17 16:34:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28aa9 closed 18/04/17 16:34:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.19 from job set of time 1523972040000 ms 18/04/17 16:34:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 48.0 (TID 48) in 5618 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:34:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 48.0, whose tasks have all completed, from pool 18/04/17 16:34:05 INFO scheduler.DAGScheduler: ResultStage 48 (foreachPartition at PredictorEngineApp.java:153) finished in 5.618 s 18/04/17 16:34:05 INFO scheduler.DAGScheduler: Job 48 finished: foreachPartition at PredictorEngineApp.java:153, took 5.841615 s 18/04/17 16:34:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6a4d095e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6a4d095e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34858, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28aab, negotiated timeout = 60000 18/04/17 16:34:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28aab 18/04/17 16:34:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28aab closed 18/04/17 16:34:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.28 from job set of time 1523972040000 ms 18/04/17 16:34:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 34.0 (TID 34) in 5893 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:34:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 34.0, whose tasks have all completed, from pool 18/04/17 16:34:06 INFO scheduler.DAGScheduler: ResultStage 34 (foreachPartition at PredictorEngineApp.java:153) finished in 5.893 s 18/04/17 16:34:06 INFO scheduler.DAGScheduler: Job 33 finished: foreachPartition at PredictorEngineApp.java:153, took 5.966478 s 18/04/17 16:34:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x17c17dd2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x17c17dd20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58500, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91ba, negotiated timeout = 60000 18/04/17 16:34:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91ba 18/04/17 16:34:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91ba closed 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.9 from job set of time 1523972040000 ms 18/04/17 16:34:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 45.0 (TID 45) in 6381 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:34:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 45.0, whose tasks have all completed, from pool 18/04/17 16:34:06 INFO scheduler.DAGScheduler: ResultStage 45 (foreachPartition at PredictorEngineApp.java:153) finished in 6.382 s 18/04/17 16:34:06 INFO scheduler.DAGScheduler: Job 44 finished: foreachPartition at PredictorEngineApp.java:153, took 6.574879 s 18/04/17 16:34:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6cd5130d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6cd5130d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58503, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91bb, negotiated timeout = 60000 18/04/17 16:34:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91bb 18/04/17 16:34:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91bb closed 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.6 from job set of time 1523972040000 ms 18/04/17 16:34:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 41.0 (TID 41) in 6559 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:34:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 41.0, whose tasks have all completed, from pool 18/04/17 16:34:06 INFO scheduler.DAGScheduler: ResultStage 41 (foreachPartition at PredictorEngineApp.java:153) finished in 6.560 s 18/04/17 16:34:06 INFO scheduler.DAGScheduler: Job 41 finished: foreachPartition at PredictorEngineApp.java:153, took 6.723640 s 18/04/17 16:34:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3ef43961 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3ef439610x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58507, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91bd, negotiated timeout = 60000 18/04/17 16:34:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91bd 18/04/17 16:34:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91bd closed 18/04/17 16:34:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.20 from job set of time 1523972040000 ms 18/04/17 16:34:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 47.0 (TID 47) in 6658 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:34:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 47.0, whose tasks have all completed, from pool 18/04/17 16:34:07 INFO scheduler.DAGScheduler: ResultStage 47 (foreachPartition at PredictorEngineApp.java:153) finished in 6.659 s 18/04/17 16:34:07 INFO scheduler.DAGScheduler: Job 47 finished: foreachPartition at PredictorEngineApp.java:153, took 6.877696 s 18/04/17 16:34:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72eb921 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x72eb9210x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34872, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28aac, negotiated timeout = 60000 18/04/17 16:34:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28aac 18/04/17 16:34:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28aac closed 18/04/17 16:34:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.32 from job set of time 1523972040000 ms 18/04/17 16:34:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 39.0 (TID 39) in 7221 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:34:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 39.0, whose tasks have all completed, from pool 18/04/17 16:34:07 INFO scheduler.DAGScheduler: ResultStage 39 (foreachPartition at PredictorEngineApp.java:153) finished in 7.222 s 18/04/17 16:34:07 INFO scheduler.DAGScheduler: Job 39 finished: foreachPartition at PredictorEngineApp.java:153, took 7.367098 s 18/04/17 16:34:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x38e27c10 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x38e27c100x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58514, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91bf, negotiated timeout = 60000 18/04/17 16:34:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91bf 18/04/17 16:34:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91bf closed 18/04/17 16:34:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.23 from job set of time 1523972040000 ms 18/04/17 16:34:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 49.0 (TID 49) in 7891 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:34:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 49.0, whose tasks have all completed, from pool 18/04/17 16:34:08 INFO scheduler.DAGScheduler: ResultStage 49 (foreachPartition at PredictorEngineApp.java:153) finished in 7.893 s 18/04/17 16:34:08 INFO scheduler.DAGScheduler: Job 51 finished: foreachPartition at PredictorEngineApp.java:153, took 8.120063 s 18/04/17 16:34:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x24358531 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x243585310x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34880, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28aad, negotiated timeout = 60000 18/04/17 16:34:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28aad 18/04/17 16:34:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28aad closed 18/04/17 16:34:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.33 from job set of time 1523972040000 ms 18/04/17 16:34:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 40.0 (TID 40) in 10486 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:34:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 40.0, whose tasks have all completed, from pool 18/04/17 16:34:10 INFO scheduler.DAGScheduler: ResultStage 40 (foreachPartition at PredictorEngineApp.java:153) finished in 10.487 s 18/04/17 16:34:10 INFO scheduler.DAGScheduler: Job 40 finished: foreachPartition at PredictorEngineApp.java:153, took 10.641359 s 18/04/17 16:34:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x594a3435 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x594a34350x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34886, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28aaf, negotiated timeout = 60000 18/04/17 16:34:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28aaf 18/04/17 16:34:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28aaf closed 18/04/17 16:34:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.29 from job set of time 1523972040000 ms 18/04/17 16:34:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 37.0 (TID 37) in 10711 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:34:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 37.0, whose tasks have all completed, from pool 18/04/17 16:34:10 INFO scheduler.DAGScheduler: ResultStage 37 (foreachPartition at PredictorEngineApp.java:153) finished in 10.712 s 18/04/17 16:34:10 INFO scheduler.DAGScheduler: Job 37 finished: foreachPartition at PredictorEngineApp.java:153, took 10.829173 s 18/04/17 16:34:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x29ac3ea5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x29ac3ea50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58527, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91c2, negotiated timeout = 60000 18/04/17 16:34:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91c2 18/04/17 16:34:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91c2 closed 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.18 from job set of time 1523972040000 ms 18/04/17 16:34:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 43.0 (TID 43) in 11193 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:34:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 43.0, whose tasks have all completed, from pool 18/04/17 16:34:11 INFO scheduler.DAGScheduler: ResultStage 43 (foreachPartition at PredictorEngineApp.java:153) finished in 11.193 s 18/04/17 16:34:11 INFO scheduler.DAGScheduler: Job 43 finished: foreachPartition at PredictorEngineApp.java:153, took 11.372991 s 18/04/17 16:34:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x355a3ccb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x355a3ccb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52149, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a918c, negotiated timeout = 60000 18/04/17 16:34:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a918c 18/04/17 16:34:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a918c closed 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.10 from job set of time 1523972040000 ms 18/04/17 16:34:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 46.0 (TID 46) in 11545 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:34:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 46.0, whose tasks have all completed, from pool 18/04/17 16:34:11 INFO scheduler.DAGScheduler: ResultStage 46 (foreachPartition at PredictorEngineApp.java:153) finished in 11.546 s 18/04/17 16:34:11 INFO scheduler.DAGScheduler: Job 46 finished: foreachPartition at PredictorEngineApp.java:153, took 11.765229 s 18/04/17 16:34:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6a0e8227 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6a0e82270x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58534, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91c3, negotiated timeout = 60000 18/04/17 16:34:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91c3 18/04/17 16:34:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91c3 closed 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.22 from job set of time 1523972040000 ms 18/04/17 16:34:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 42.0 (TID 42) in 11641 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:34:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 42.0, whose tasks have all completed, from pool 18/04/17 16:34:11 INFO scheduler.DAGScheduler: ResultStage 42 (foreachPartition at PredictorEngineApp.java:153) finished in 11.641 s 18/04/17 16:34:11 INFO scheduler.DAGScheduler: Job 42 finished: foreachPartition at PredictorEngineApp.java:153, took 11.813346 s 18/04/17 16:34:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x631c0ae9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x631c0ae90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34899, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ab0, negotiated timeout = 60000 18/04/17 16:34:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ab0 18/04/17 16:34:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ab0 closed 18/04/17 16:34:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.34 from job set of time 1523972040000 ms 18/04/17 16:34:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 38.0 (TID 38) in 12883 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:34:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 38.0, whose tasks have all completed, from pool 18/04/17 16:34:13 INFO scheduler.DAGScheduler: ResultStage 38 (foreachPartition at PredictorEngineApp.java:153) finished in 12.886 s 18/04/17 16:34:13 INFO scheduler.DAGScheduler: Job 38 finished: foreachPartition at PredictorEngineApp.java:153, took 13.014893 s 18/04/17 16:34:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66f4d147 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x66f4d1470x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52160, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a918d, negotiated timeout = 60000 18/04/17 16:34:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a918d 18/04/17 16:34:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a918d closed 18/04/17 16:34:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.2 from job set of time 1523972040000 ms 18/04/17 16:34:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 30.0 (TID 30) in 14417 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:34:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 30.0, whose tasks have all completed, from pool 18/04/17 16:34:14 INFO scheduler.DAGScheduler: ResultStage 30 (foreachPartition at PredictorEngineApp.java:153) finished in 14.417 s 18/04/17 16:34:14 INFO scheduler.DAGScheduler: Job 30 finished: foreachPartition at PredictorEngineApp.java:153, took 14.464271 s 18/04/17 16:34:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x173f4e1c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x173f4e1c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52164, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9192, negotiated timeout = 60000 18/04/17 16:34:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9192 18/04/17 16:34:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9192 closed 18/04/17 16:34:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.24 from job set of time 1523972040000 ms 18/04/17 16:34:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 33.0 (TID 33) in 14621 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:34:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 33.0, whose tasks have all completed, from pool 18/04/17 16:34:14 INFO scheduler.DAGScheduler: ResultStage 33 (foreachPartition at PredictorEngineApp.java:153) finished in 14.622 s 18/04/17 16:34:14 INFO scheduler.DAGScheduler: Job 34 finished: foreachPartition at PredictorEngineApp.java:153, took 14.687453 s 18/04/17 16:34:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x49f0c2e3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x49f0c2e30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58549, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91c9, negotiated timeout = 60000 18/04/17 16:34:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91c9 18/04/17 16:34:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91c9 closed 18/04/17 16:34:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.27 from job set of time 1523972040000 ms 18/04/17 16:34:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 32.0 (TID 32) in 15100 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:34:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 32.0, whose tasks have all completed, from pool 18/04/17 16:34:15 INFO scheduler.DAGScheduler: ResultStage 32 (foreachPartition at PredictorEngineApp.java:153) finished in 15.101 s 18/04/17 16:34:15 INFO scheduler.DAGScheduler: Job 32 finished: foreachPartition at PredictorEngineApp.java:153, took 15.160390 s 18/04/17 16:34:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x651cde31 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x651cde310x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58554, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91ca, negotiated timeout = 60000 18/04/17 16:34:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91ca 18/04/17 16:34:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91ca closed 18/04/17 16:34:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:15 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.11 from job set of time 1523972040000 ms 18/04/17 16:34:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 44.0 (TID 44) in 18019 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:34:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 44.0, whose tasks have all completed, from pool 18/04/17 16:34:18 INFO scheduler.DAGScheduler: ResultStage 44 (foreachPartition at PredictorEngineApp.java:153) finished in 18.020 s 18/04/17 16:34:18 INFO scheduler.DAGScheduler: Job 45 finished: foreachPartition at PredictorEngineApp.java:153, took 18.206661 s 18/04/17 16:34:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2a49cb5c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:34:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2a49cb5c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:34:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:34:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52180, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:34:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9193, negotiated timeout = 60000 18/04/17 16:34:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9193 18/04/17 16:34:18 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9193 closed 18/04/17 16:34:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:34:18 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.5 from job set of time 1523972040000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Added jobs for time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.2 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.0 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.1 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.3 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.4 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.5 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.7 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.6 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.0 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.8 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.3 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.4 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.11 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.9 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.12 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.10 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.15 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.13 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.14 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.16 from job set of time 1523972100000 ms 18/04/17 16:35:00 
INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.17 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.13 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.18 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.19 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.21 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.14 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.20 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.17 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.23 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.22 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.24 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.16 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.21 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.27 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.26 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.28 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.25 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.29 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.30 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.31 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.32 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.34 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.30 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.33 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972100000 ms.35 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.35 from job set of time 1523972100000 ms 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO 
spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 52 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 52 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 52 (KafkaRDD[103] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_52 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_52_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_52_piece0 in memory on ***IP masked***:45737 
(size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 52 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 52 (KafkaRDD[103] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 52.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 54 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 53 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 53 (KafkaRDD[91] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 52.0 (TID 52, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_53 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_53_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_53_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 53 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 53 (KafkaRDD[91] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 53.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 53 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 54 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 53.0 (TID 53, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 54 (KafkaRDD[84] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_54 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_54_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_54_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 54 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 54 (KafkaRDD[84] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 54.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 55 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 55 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 55 (KafkaRDD[82] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 54.0 (TID 54, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_55 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_52_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_55_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_55_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 55 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 55 (KafkaRDD[82] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 55.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 56 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 56 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 56 (KafkaRDD[79] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 55.0 (TID 55, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_56 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_56_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_56_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 56 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 56 (KafkaRDD[79] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 56.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 57 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 57 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 57 (KafkaRDD[87] at createDirectStream at PredictorEngineApp.java:125), which has no 
missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 56.0 (TID 56, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_57 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_57_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_57_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 57 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 57 (KafkaRDD[87] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 57.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 58 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 58 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 58 (KafkaRDD[73] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 57.0 (TID 57, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_58 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_54_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_53_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 34 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 28 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_58_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_26_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_58_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 58 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 58 (KafkaRDD[73] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 58.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 59 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 59 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 59 (KafkaRDD[101] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 58.0 (TID 58, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_55_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_59 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_26_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_56_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_59_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_59_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 59 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 59 (KafkaRDD[101] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 59.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 60 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 60 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 60 (KafkaRDD[78] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_60 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 59.0 (TID 59, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 27 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 30 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_28_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_60_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_60_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 60 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 60 (KafkaRDD[78] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 60.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 61 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 61 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 
INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 61 (KafkaRDD[77] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 60.0 (TID 60, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_61 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_57_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_61_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_61_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 61 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 61 (KafkaRDD[77] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 61.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 62 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 62 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 62 (KafkaRDD[99] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 61.0 (TID 61, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_62 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_28_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 29 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_62_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_62_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_58_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 62 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_27_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 62 (KafkaRDD[99] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 62.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 64 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 63 (foreachPartition at PredictorEngineApp.java:153) 
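The entries timestamped 16:34:xx above belong to the job set of time 1523972040000 ms, and everything from 16:35:00 onward belongs to the job set of time 1523972100000 ms, so the streaming batch interval is 60 seconds. The small Java snippet below is only an illustrative check of that arithmetic and of the epoch-to-wall-clock conversion; the UTC+03:00 offset is an assumption inferred from the printed timestamps and is not stated in the log itself.

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class BatchTimeCheck {
    public static void main(String[] args) {
        long previousBatch = 1523972040000L; // job set finishing at 16:34:xx above
        long currentBatch  = 1523972100000L; // job set added at 16:35:00 above

        // 1523972100000 - 1523972040000 = 60000 ms, i.e. a 60 s batch interval.
        System.out.println("batch interval ms = " + (currentBatch - previousBatch));

        // UTC+03:00 is assumed here so that the epoch matches the log's local timestamps.
        DateTimeFormatter logFormat = DateTimeFormatter.ofPattern("yy/MM/dd HH:mm:ss")
                .withZone(ZoneOffset.ofHours(3));
        System.out.println(logFormat.format(Instant.ofEpochMilli(currentBatch))); // 18/04/17 16:35:00
    }
}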
18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 63 (KafkaRDD[98] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 62.0 (TID 62, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_63 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_63_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_59_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_63_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 63 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 63 (KafkaRDD[98] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 63.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 63 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 64 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 64 (KafkaRDD[83] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_60_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 63.0 (TID 63, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_64 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_27_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_61_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_64_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 31 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_64_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 64 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 64 (KafkaRDD[83] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 64.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 66 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 65 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 65 (KafkaRDD[94] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 64.0 (TID 64, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_29_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_65 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_62_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_29_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_65_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_65_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 65 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 65 (KafkaRDD[94] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 65.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 65 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 66 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 66 (KafkaRDD[95] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 65.0 (TID 65, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_66 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 33 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_63_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_31_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_66_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_66_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 66 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 66 (KafkaRDD[95] at createDirectStream at PredictorEngineApp.java:125) 
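In the 16:35:00 job set above, several "Starting job streaming job ..." lines appear before the matching "Finished job ..." lines, which suggests that more than one output operation of the same batch is allowed to run at a time. In Spark Streaming that is governed by spark.streaming.concurrentJobs (default 1); the sketch below only shows how such a setting would be applied, and the value used is a guess rather than anything read from this log.

import org.apache.spark.SparkConf;

public class ConcurrentJobsConfSketch {
    // Illustrative only: the value "4" is not taken from this log.
    public static SparkConf build() {
        return new SparkConf()
                .setAppName("predictor-engine")
                .set("spark.streaming.concurrentJobs", "4");
    }
}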
18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 66.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 67 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 67 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 67 (KafkaRDD[97] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_67 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 66.0 (TID 66, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_31_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_67_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_67_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 67 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 67 (KafkaRDD[97] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 67.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 68 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 68 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 68 (KafkaRDD[106] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_68 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 67.0 (TID 67, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_68_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_68_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 68 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 68 (KafkaRDD[106] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 68.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 69 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 69 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of 
final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 69 (KafkaRDD[96] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_69 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 68.0 (TID 68, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 32 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_30_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_69_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_65_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_69_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 69 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 69 (KafkaRDD[96] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 69.0 with 1 tasks 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_66_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 70 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 70 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 70 (KafkaRDD[104] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 69.0 (TID 69, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_70 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_30_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_70_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_70_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 70 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 70 (KafkaRDD[104] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 70.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 71 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 71 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 71 (KafkaRDD[92] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_71 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 70.0 (TID 70, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_64_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 35 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_67_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_33_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_71_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_71_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 71 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 71 (KafkaRDD[92] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 71.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 72 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 72 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 72 (KafkaRDD[74] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_33_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 71.0 (TID 71, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_72 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_72_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_72_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 72 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 72 (KafkaRDD[74] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 72.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 73 (foreachPartition at PredictorEngineApp.java:153) with 1 
output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 73 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 73 (KafkaRDD[100] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 72.0 (TID 72, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_73 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_32_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_73_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_73_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 73 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 73 (KafkaRDD[100] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 73.0 with 1 tasks 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_32_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 74 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 74 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 74 (KafkaRDD[90] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_74 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 73.0 (TID 73, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_68_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 37 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_35_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_74_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_74_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 74 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 74 (KafkaRDD[90] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 74.0 with 1 tasks 18/04/17 16:35:00 
INFO scheduler.DAGScheduler: Got job 75 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 75 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 75 (KafkaRDD[105] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_69_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_75 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 74.0 (TID 74, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_70_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_35_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_75_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_75_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 75 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 75 (KafkaRDD[105] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 75.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 76 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 76 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 76 (KafkaRDD[81] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 36 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_76 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 75.0 (TID 75, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_34_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_76_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_76_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 76 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 76 (KafkaRDD[81] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 
16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 76.0 with 1 tasks 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Got job 77 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 77 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting ResultStage 77 (KafkaRDD[80] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_77 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 76.0 (TID 76, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:35:00 INFO storage.MemoryStore: Block broadcast_77_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_77_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO spark.SparkContext: Created broadcast 77 from broadcast at DAGScheduler.scala:1006 18/04/17 16:35:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 77 (KafkaRDD[80] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Adding task set 77.0 with 1 tasks 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_73_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_34_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_72_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 77.0 (TID 77, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_76_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_37_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_71_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_37_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 38 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_36_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_74_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_36_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_77_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_39_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 
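The repeated stage descriptions "KafkaRDD[...] at createDirectStream at PredictorEngineApp.java:125" and "foreachPartition at PredictorEngineApp.java:153", the single-partition job per output operation (ms.0 up to at least ms.35 per batch), and the short-lived hconnection-0x.../ZooKeeper sessions that the driver opens and closes right after each job finishes are all consistent with a driver program shaped roughly like the sketch below. Everything in it beyond those two call sites is an assumption: the topic list, broker list, the per-topic loop, and whatever is actually done with the HBase connection are not visible in this driver log.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class PredictorEngineSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        // 60 s batches, matching the 1523972040000 -> 1523972100000 ms job sets in the log.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers

        // One direct stream and one output operation per topic would explain one
        // "streaming job ... ms.N" and one single-task ResultStage per topic per batch;
        // the topic names are placeholders.
        for (String topic : Arrays.asList("topic-00", "topic-01" /* , ... */)) {
            Set<String> topics = new HashSet<>(Arrays.asList(topic));
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, topics);                      // ~ PredictorEngineApp.java:125

            stream.foreachRDD(rdd -> {
                // Runs on the executors; the per-record work is not visible in this driver log.
                rdd.foreachPartition(records -> {              // ~ PredictorEngineApp.java:153
                    while (records.hasNext()) {
                        records.next();                        // process one Kafka record
                    }
                });
                // Runs on the driver after the job above completes and before JobScheduler
                // logs "Finished job". Opening and immediately closing an HBase connection
                // here would produce the hconnection-0x.../ZooKeeper connect-close pairs
                // seen in the log; what is actually done with it is not visible.
                try (Connection hbase = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
                    // e.g. record processed offsets or per-batch bookkeeping (illustrative only)
                }
            });
        }

        jssc.start();
        jssc.awaitTermination();
    }
}

Whether the driver-side connection is deliberately reopened on every job, or could be hoisted out of foreachRDD and reused, is a design choice this log alone cannot settle.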
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_39_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Added broadcast_75_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 40
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_38_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_38_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 39
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_41_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_41_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 42
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_40_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_40_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 41
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_43_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_43_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 44
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_42_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_42_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 43
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_45_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_45_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 46
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_44_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_44_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 45
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_47_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_47_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 48
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_46_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_46_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 47
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_49_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_49_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 50
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_48_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_48_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 49
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_50_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB)
18/04/17 16:35:00 INFO storage.BlockManagerInfo: Removed broadcast_50_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB)
18/04/17 16:35:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 58.0 (TID 58) in 186 ms on ***hostname masked*** (executor 2) (1/1)
18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 58.0, whose tasks have all completed, from pool
18/04/17 16:35:00 INFO scheduler.DAGScheduler: ResultStage 58 (foreachPartition at PredictorEngineApp.java:153) finished in 0.186 s
18/04/17 16:35:00 INFO scheduler.DAGScheduler: Job 58 finished: foreachPartition at PredictorEngineApp.java:153, took 0.264951 s
18/04/17 16:35:00 INFO spark.ContextCleaner: Cleaned accumulator 51
18/04/17 16:35:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x108f2055 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x108f20550x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58710, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91d6, negotiated timeout = 60000
18/04/17 16:35:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91d6
18/04/17 16:35:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91d6 closed
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 64.0 (TID 64) in 189 ms on ***hostname masked*** (executor 4) (1/1)
18/04/17 16:35:00 INFO scheduler.DAGScheduler: ResultStage 64 (foreachPartition at PredictorEngineApp.java:153) finished in 0.189 s
18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 64.0, whose tasks have all completed, from pool
18/04/17 16:35:00 INFO scheduler.DAGScheduler: Job 63 finished: foreachPartition at PredictorEngineApp.java:153, took 0.293914 s
18/04/17 16:35:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x33ce39d4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x33ce39d40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52331, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.1 from job set of time 1523972100000 ms
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a919f, negotiated timeout = 60000
18/04/17 16:35:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a919f
18/04/17 16:35:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 68.0 (TID 68) in 192 ms on ***hostname masked*** (executor 1) (1/1)
18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 68.0, whose tasks have all completed, from pool
18/04/17 16:35:00 INFO scheduler.DAGScheduler: ResultStage 68 (foreachPartition at PredictorEngineApp.java:153) finished in 0.193 s
18/04/17 16:35:00 INFO scheduler.DAGScheduler: Job 68 finished: foreachPartition at PredictorEngineApp.java:153, took 0.312303 s
18/04/17 16:35:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5825d9bd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5825d9bd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a919f closed
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35078, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28abe, negotiated timeout = 60000
18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.11 from job set of time 1523972100000 ms
18/04/17 16:35:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 71.0 (TID 71) in 203 ms on ***hostname masked*** (executor 7) (1/1)
18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 71.0, whose tasks have all completed, from pool
18/04/17 16:35:00 INFO scheduler.DAGScheduler: ResultStage 71 (foreachPartition at PredictorEngineApp.java:153) finished in 0.203 s
18/04/17 16:35:00 INFO scheduler.DAGScheduler: Job 71 finished: foreachPartition at PredictorEngineApp.java:153, took 0.332848 s
18/04/17 16:35:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28abe
18/04/17 16:35:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28abe closed
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.34 from job set of time 1523972100000 ms
18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.20 from job set of time 1523972100000 ms
18/04/17 16:35:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 67.0 (TID 67) in 655 ms on ***hostname masked*** (executor 8) (1/1)
18/04/17 16:35:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 67.0, whose tasks have all completed, from pool
18/04/17 16:35:00 INFO scheduler.DAGScheduler: ResultStage 67 (foreachPartition at PredictorEngineApp.java:153) finished in 0.656 s
18/04/17 16:35:00 INFO scheduler.DAGScheduler: Job 67 finished: foreachPartition at PredictorEngineApp.java:153, took 0.771541 s
18/04/17 16:35:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4a90876d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4a90876d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58720, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91db, negotiated timeout = 60000
18/04/17 16:35:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91db
18/04/17 16:35:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91db closed
18/04/17 16:35:00 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.25 from job set of time 1523972100000 ms
18/04/17 16:35:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 77.0 (TID 77) in 2508 ms on ***hostname masked*** (executor 11) (1/1)
18/04/17 16:35:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 77.0, whose tasks have all completed, from pool
18/04/17 16:35:02 INFO scheduler.DAGScheduler: ResultStage 77 (foreachPartition at PredictorEngineApp.java:153) finished in 2.509 s
18/04/17 16:35:02 INFO scheduler.DAGScheduler: Job 77 finished: foreachPartition at PredictorEngineApp.java:153, took 2.654975 s
18/04/17 16:35:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6d85cd1b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6d85cd1b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58728, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91dc, negotiated timeout = 60000
18/04/17 16:35:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91dc
18/04/17 16:35:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91dc closed
18/04/17 16:35:02 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.8 from job set of time 1523972100000 ms
18/04/17 16:35:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 56.0 (TID 56) in 3543 ms on ***hostname masked*** (executor 9) (1/1)
18/04/17 16:35:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 56.0, whose tasks have all completed, from pool
18/04/17 16:35:03 INFO scheduler.DAGScheduler: ResultStage 56 (foreachPartition at PredictorEngineApp.java:153) finished in 3.543 s
18/04/17 16:35:03 INFO scheduler.DAGScheduler: Job 56 finished: foreachPartition at PredictorEngineApp.java:153, took 3.578413 s
18/04/17 16:35:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x9d64a54 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x9d64a540x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35094, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ac3, negotiated timeout = 60000
18/04/17 16:35:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ac3
18/04/17 16:35:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ac3 closed
18/04/17 16:35:03 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.7 from job set of time 1523972100000 ms
18/04/17 16:35:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 59.0 (TID 59) in 4781 ms on ***hostname masked*** (executor 4) (1/1)
18/04/17 16:35:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 59.0, whose tasks have all completed, from pool
18/04/17 16:35:04 INFO scheduler.DAGScheduler: ResultStage 59 (foreachPartition at PredictorEngineApp.java:153) finished in 4.782 s
18/04/17 16:35:04 INFO scheduler.DAGScheduler: Job 59 finished: foreachPartition at PredictorEngineApp.java:153, took 4.865027 s
18/04/17 16:35:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x450a44ba connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x450a44ba0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35099, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ac5, negotiated timeout = 60000
18/04/17 16:35:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ac5
18/04/17 16:35:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ac5 closed
18/04/17 16:35:04 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.29 from job set of time 1523972100000 ms
18/04/17 16:35:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 62.0 (TID 62) in 4944 ms on ***hostname masked*** (executor 12) (1/1)
18/04/17 16:35:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 62.0, whose tasks have all completed, from pool
18/04/17 16:35:05 INFO scheduler.DAGScheduler: ResultStage 62 (foreachPartition at PredictorEngineApp.java:153) finished in 4.944 s
18/04/17 16:35:05 INFO scheduler.DAGScheduler: Job 62 finished: foreachPartition at PredictorEngineApp.java:153, took 5.040209 s
18/04/17 16:35:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x56631349 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x566313490x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35102, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ac6, negotiated timeout = 60000
18/04/17 16:35:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ac6
18/04/17 16:35:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ac6 closed
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.27 from job set of time 1523972100000 ms
18/04/17 16:35:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 70.0 (TID 70) in 4990 ms on ***hostname masked*** (executor 7) (1/1)
18/04/17 16:35:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 70.0, whose tasks have all completed, from pool
18/04/17 16:35:05 INFO scheduler.DAGScheduler: ResultStage 70 (foreachPartition at PredictorEngineApp.java:153) finished in 4.991 s
18/04/17 16:35:05 INFO scheduler.DAGScheduler: Job 70 finished: foreachPartition at PredictorEngineApp.java:153, took 5.117946 s
18/04/17 16:35:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4105f2c5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4105f2c50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52361, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91a6, negotiated timeout = 60000
18/04/17 16:35:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91a6
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91a6 closed
18/04/17 16:35:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.32 from job set of time 1523972100000 ms
18/04/17 16:35:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 72.0 (TID 72) in 5679 ms on ***hostname masked*** (executor 11) (1/1)
18/04/17 16:35:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 72.0, whose tasks have all completed, from pool
18/04/17 16:35:05 INFO scheduler.DAGScheduler: ResultStage 72 (foreachPartition at PredictorEngineApp.java:153) finished in 5.680 s
18/04/17 16:35:05 INFO scheduler.DAGScheduler: Job 72 finished: foreachPartition at PredictorEngineApp.java:153, took 5.810542 s
18/04/17 16:35:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5d669c24 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5d669c240x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58747, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91df, negotiated timeout = 60000
18/04/17 16:35:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91df
18/04/17 16:35:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91df closed
18/04/17 16:35:05 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.2 from job set of time 1523972100000 ms
18/04/17 16:35:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 52.0 (TID 52) in 6204 ms on ***hostname masked*** (executor 3) (1/1)
18/04/17 16:35:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 52.0, whose tasks have all completed, from pool
18/04/17 16:35:06 INFO scheduler.DAGScheduler: ResultStage 52 (foreachPartition at PredictorEngineApp.java:153) finished in 6.204 s
18/04/17 16:35:06 INFO scheduler.DAGScheduler: Job 52 finished: foreachPartition at PredictorEngineApp.java:153, took 6.220248 s
18/04/17 16:35:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4833698b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4833698b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52369, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91a8, negotiated timeout = 60000
18/04/17 16:35:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91a8
18/04/17 16:35:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91a8 closed
18/04/17 16:35:06 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.31 from job set of time 1523972100000 ms
18/04/17 16:35:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 54.0 (TID 54) in 7409 ms on ***hostname masked*** (executor 6) (1/1)
18/04/17 16:35:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 54.0, whose tasks have all completed, from pool
18/04/17 16:35:07 INFO scheduler.DAGScheduler: ResultStage 54 (foreachPartition at PredictorEngineApp.java:153) finished in 7.409 s
18/04/17 16:35:07 INFO scheduler.DAGScheduler: Job 53 finished: foreachPartition at PredictorEngineApp.java:153, took 7.436988 s
18/04/17 16:35:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2edd647d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2edd647d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52373, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91a9, negotiated timeout = 60000
18/04/17 16:35:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91a9
18/04/17 16:35:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91a9 closed
18/04/17 16:35:07 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.12 from job set of time 1523972100000 ms
18/04/17 16:35:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 73.0 (TID 73) in 8646 ms on ***hostname masked*** (executor 10) (1/1)
18/04/17 16:35:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 73.0, whose tasks have all completed, from pool
18/04/17 16:35:08 INFO scheduler.DAGScheduler: ResultStage 73 (foreachPartition at PredictorEngineApp.java:153) finished in 8.647 s
18/04/17 16:35:08 INFO scheduler.DAGScheduler: Job 73 finished: foreachPartition at PredictorEngineApp.java:153, took 8.780131 s
18/04/17 16:35:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6f6ba1e9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6f6ba1e90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52378, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91aa, negotiated timeout = 60000
18/04/17 16:35:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91aa
18/04/17 16:35:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91aa closed
18/04/17 16:35:08 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.28 from job set of time 1523972100000 ms
18/04/17 16:35:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 60.0 (TID 60) in 8943 ms on ***hostname masked*** (executor 10) (1/1)
18/04/17 16:35:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 60.0, whose tasks have all completed, from pool
18/04/17 16:35:09 INFO scheduler.DAGScheduler: ResultStage 60 (foreachPartition at PredictorEngineApp.java:153) finished in 8.943 s
18/04/17 16:35:09 INFO scheduler.DAGScheduler: Job 60 finished: foreachPartition at PredictorEngineApp.java:153, took 9.031160 s
18/04/17 16:35:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x20b53b8a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x20b53b8a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58764, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91e3, negotiated timeout = 60000
18/04/17 16:35:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91e3
18/04/17 16:35:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91e3 closed
18/04/17 16:35:09 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.6 from job set of time 1523972100000 ms
18/04/17 16:35:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 76.0 (TID 76) in 9212 ms on ***hostname masked*** (executor 2) (1/1)
18/04/17 16:35:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 76.0, whose tasks have all completed, from pool
18/04/17 16:35:09 INFO scheduler.DAGScheduler: ResultStage 76 (foreachPartition at PredictorEngineApp.java:153) finished in 9.214 s
18/04/17 16:35:09 INFO scheduler.DAGScheduler: Job 76 finished: foreachPartition at PredictorEngineApp.java:153, took 9.357769 s
18/04/17 16:35:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x541e521f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x541e521f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35129, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28aca, negotiated timeout = 60000
18/04/17 16:35:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28aca
18/04/17 16:35:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28aca closed
18/04/17 16:35:09 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.9 from job set of time 1523972100000 ms
18/04/17 16:35:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 53.0 (TID 53) in 10146 ms on ***hostname masked*** (executor 3) (1/1)
18/04/17 16:35:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 53.0, whose tasks have all completed, from pool
18/04/17 16:35:10 INFO scheduler.DAGScheduler: ResultStage 53 (foreachPartition at PredictorEngineApp.java:153) finished in 10.146 s
18/04/17 16:35:10 INFO scheduler.DAGScheduler: Job 54 finished: foreachPartition at PredictorEngineApp.java:153, took 10.169880 s
18/04/17 16:35:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5428965a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5428965a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58771, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91e6, negotiated timeout = 60000
18/04/17 16:35:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91e6
18/04/17 16:35:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91e6 closed
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.19 from job set of time 1523972100000 ms
18/04/17 16:35:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 57.0 (TID 57) in 10310 ms on ***hostname masked*** (executor 3) (1/1)
18/04/17 16:35:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 57.0, whose tasks have all completed, from pool
18/04/17 16:35:10 INFO scheduler.DAGScheduler: ResultStage 57 (foreachPartition at PredictorEngineApp.java:153) finished in 10.311 s
18/04/17 16:35:10 INFO scheduler.DAGScheduler: Job 57 finished: foreachPartition at PredictorEngineApp.java:153, took 10.349844 s
18/04/17 16:35:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xee6dcd5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xee6dcd50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52392, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91ab, negotiated timeout = 60000
18/04/17 16:35:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91ab
18/04/17 16:35:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91ab closed
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.15 from job set of time 1523972100000 ms
18/04/17 16:35:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 74.0 (TID 74) in 10339 ms on ***hostname masked*** (executor 9) (1/1)
18/04/17 16:35:10 INFO scheduler.DAGScheduler: ResultStage 74 (foreachPartition at PredictorEngineApp.java:153) finished in 10.340 s
18/04/17 16:35:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 74.0, whose tasks have all completed, from pool
18/04/17 16:35:10 INFO scheduler.DAGScheduler: Job 74 finished: foreachPartition at PredictorEngineApp.java:153, took 10.488455 s
18/04/17 16:35:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x630c0dd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x630c0dd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52395, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91ac, negotiated timeout = 60000
18/04/17 16:35:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91ac
18/04/17 16:35:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91ac closed
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.18 from job set of time 1523972100000 ms
18/04/17 16:35:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 75.0 (TID 75) in 10690 ms on ***hostname masked*** (executor 9) (1/1)
18/04/17 16:35:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 75.0, whose tasks have all completed, from pool
18/04/17 16:35:10 INFO scheduler.DAGScheduler: ResultStage 75 (foreachPartition at PredictorEngineApp.java:153) finished in 10.692 s
18/04/17 16:35:10 INFO scheduler.DAGScheduler: Job 75 finished: foreachPartition at PredictorEngineApp.java:153, took 10.831406 s
18/04/17 16:35:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5d5c7c9d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5d5c7c9d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35144, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28acc, negotiated timeout = 60000
18/04/17 16:35:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28acc
18/04/17 16:35:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28acc closed
18/04/17 16:35:10 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.33 from job set of time 1523972100000 ms
18/04/17 16:35:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 63.0 (TID 63) in 13800 ms on ***hostname masked*** (executor 6) (1/1)
18/04/17 16:35:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 63.0, whose tasks have all completed, from pool
18/04/17 16:35:13 INFO scheduler.DAGScheduler: ResultStage 63 (foreachPartition at PredictorEngineApp.java:153) finished in 13.801 s
18/04/17 16:35:13 INFO scheduler.DAGScheduler: Job 64 finished: foreachPartition at PredictorEngineApp.java:153, took 13.901285 s
18/04/17 16:35:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x36607f02 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x36607f020x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35153, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ace, negotiated timeout = 60000
18/04/17 16:35:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ace
18/04/17 16:35:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ace closed
18/04/17 16:35:14 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.26 from job set of time 1523972100000 ms
18/04/17 16:35:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 65.0 (TID 65) in 13863 ms on ***hostname masked*** (executor 1) (1/1)
18/04/17 16:35:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 65.0, whose tasks have all completed, from pool
18/04/17 16:35:14 INFO scheduler.DAGScheduler: ResultStage 65 (foreachPartition at PredictorEngineApp.java:153) finished in 13.864 s
18/04/17 16:35:14 INFO scheduler.DAGScheduler: Job 66 finished: foreachPartition at PredictorEngineApp.java:153, took 13.971899 s
18/04/17 16:35:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x151a0bc1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x151a0bc10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58794, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91eb, negotiated timeout = 60000
18/04/17 16:35:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91eb
18/04/17 16:35:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91eb closed
18/04/17 16:35:14 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.22 from job set of time 1523972100000 ms
18/04/17 16:35:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 69.0 (TID 69) in 15016 ms on ***hostname masked*** (executor 5) (1/1)
18/04/17 16:35:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 69.0, whose tasks have all completed, from pool
18/04/17 16:35:15 INFO scheduler.DAGScheduler: ResultStage 69 (foreachPartition at PredictorEngineApp.java:153) finished in 15.018 s
18/04/17 16:35:15 INFO scheduler.DAGScheduler: Job 69 finished: foreachPartition at PredictorEngineApp.java:153, took 15.140994 s
18/04/17 16:35:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x61c4ed73 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x61c4ed730x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35160, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ad0, negotiated timeout = 60000
18/04/17 16:35:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ad0
18/04/17 16:35:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ad0 closed
18/04/17 16:35:15 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:15 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.24 from job set of time 1523972100000 ms
18/04/17 16:35:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 66.0 (TID 66) in 15408 ms on ***hostname masked*** (executor 5) (1/1)
18/04/17 16:35:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 66.0, whose tasks have all completed, from pool
18/04/17 16:35:15 INFO scheduler.DAGScheduler: ResultStage 66 (foreachPartition at PredictorEngineApp.java:153) finished in 15.409 s
18/04/17 16:35:15 INFO scheduler.DAGScheduler: Job 65 finished: foreachPartition at PredictorEngineApp.java:153, took 15.520927 s
18/04/17 16:35:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x40ec9142 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x40ec91420x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52419, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91b1, negotiated timeout = 60000
18/04/17 16:35:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91b1
18/04/17 16:35:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91b1 closed
18/04/17 16:35:15 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:15 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.23 from job set of time 1523972100000 ms
18/04/17 16:35:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 55.0 (TID 55) in 18730 ms on ***hostname masked*** (executor 2) (1/1)
18/04/17 16:35:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 55.0, whose tasks have all completed, from pool
18/04/17 16:35:18 INFO scheduler.DAGScheduler: ResultStage 55 (foreachPartition at PredictorEngineApp.java:153) finished in 18.730 s
18/04/17 16:35:18 INFO scheduler.DAGScheduler: Job 55 finished: foreachPartition at PredictorEngineApp.java:153, took 18.761725 s
18/04/17 16:35:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4286b906 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4286b9060x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35171, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ad1, negotiated timeout = 60000
18/04/17 16:35:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ad1
18/04/17 16:35:18 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ad1 closed
18/04/17 16:35:18 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:18 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.10 from job set of time 1523972100000 ms
18/04/17 16:35:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 61.0 (TID 61) in 21322 ms on ***hostname masked*** (executor 5) (1/1)
18/04/17 16:35:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 61.0, whose tasks have all completed, from pool
18/04/17 16:35:21 INFO scheduler.DAGScheduler: ResultStage 61 (foreachPartition at PredictorEngineApp.java:153) finished in 21.322 s
18/04/17 16:35:21 INFO scheduler.DAGScheduler: Job 61 finished: foreachPartition at PredictorEngineApp.java:153, took 21.413751 s
18/04/17 16:35:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xe694fbd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181
18/04/17 16:35:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xe694fbd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase
18/04/17 16:35:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error)
18/04/17 16:35:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58817, server: ***hostname masked***/***IP masked***:2181
18/04/17 16:35:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91ee, negotiated timeout = 60000
18/04/17 16:35:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91ee
18/04/17 16:35:21 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91ee closed
18/04/17 16:35:21 INFO zookeeper.ClientCnxn: EventThread shut down
18/04/17 16:35:21 INFO scheduler.JobScheduler: Finished job streaming job 1523972100000 ms.5 from job set of time 1523972100000 ms
18/04/17 16:35:21 INFO scheduler.JobScheduler: Total delay: 21.533 s for time 1523972100000 ms (execution: 21.461 s)
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 36 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 36
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 0 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 0
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 36 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 36
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 0 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 0
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 37 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 37
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 1 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 1
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 37 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 37
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 1 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 1
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 38 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 38
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 2 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 2
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 38 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 38
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 2 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 2
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 39 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 39
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 3 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 3
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 39 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 39
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 3 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 3
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 40 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 40
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 4 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 4
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 40 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 40
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 4 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 4
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 41 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 41
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 5 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 5
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 41 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 41
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 5 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 5
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 42 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 42
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 6 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 6
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 42 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 42
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 6 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 6
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 43 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 43
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 7 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 7
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 43 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 43
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 7 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 7
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 44 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 44
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 8 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 8
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 44 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 44
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 8 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 8
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 45 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 45
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 9 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 9
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 45 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 45
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 9 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 9
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 46 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 46
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 10 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 10
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 46 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 46
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 10 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 10
18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 47 from persistence list
18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD
47 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 11 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 11 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 47 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 47 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 11 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 11 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 48 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 48 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 12 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 12 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 48 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 48 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 12 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 12 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 49 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 49 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 13 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 13 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 49 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 49 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 13 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 13 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 50 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 50 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 14 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 14 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 50 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 50 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 14 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 14 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 51 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 51 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 15 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 15 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 51 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 51 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 15 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 15 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 52 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 52 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 16 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 16 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 52 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 52 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 16 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 16 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 53 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 53 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 17 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 17 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 53 from persistence list 18/04/17 16:35:21 INFO 
storage.BlockManager: Removing RDD 53 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 17 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 17 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 54 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 54 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 18 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 18 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 54 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 54 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 18 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 18 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 55 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 55 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 19 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 19 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 55 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 55 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 19 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 19 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 56 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 56 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 20 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 20 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 56 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 56 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 20 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 20 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 57 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 57 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 21 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 21 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 57 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 57 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 21 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 21 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 58 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 58 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 22 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 22 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 58 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 58 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 22 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 22 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 59 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 59 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 23 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 23 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 59 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 59 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 23 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 23 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 60 from 
persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 60 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 24 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 24 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 60 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 60 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 24 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 24 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 61 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 61 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 25 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 25 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 61 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 61 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 25 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 25 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 62 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 62 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 26 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 26 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 62 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 62 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 26 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 26 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 63 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 63 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 27 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 27 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 63 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 63 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 27 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 27 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 64 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 64 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 28 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 28 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 64 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 64 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 28 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 28 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 65 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 65 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 29 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 29 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 65 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 65 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 29 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 29 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 66 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 66 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 30 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 30 18/04/17 16:35:21 INFO 
kafka.KafkaRDD: Removing RDD 66 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 66 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 30 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 30 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 67 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 67 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 31 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 31 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 67 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 67 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 31 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 31 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 68 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 68 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 32 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 32 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 68 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 68 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 32 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 32 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 69 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 69 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 33 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 33 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 69 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 69 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 33 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 33 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 70 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 70 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 34 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 34 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 70 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 70 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 34 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 34 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 71 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 71 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 35 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 35 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 71 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 71 18/04/17 16:35:21 INFO kafka.KafkaRDD: Removing RDD 35 from persistence list 18/04/17 16:35:21 INFO storage.BlockManager: Removing RDD 35 18/04/17 16:35:21 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:35:21 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523971980000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Added jobs for time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.1 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.2 from job set of time 1523972160000 ms 18/04/17 
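The stretch above is routine end-of-batch housekeeping rather than an error: batch 1523972100000 finished with a total delay of 21.533 s (21.461 s of it execution), after which the driver unpersisted the cached KafkaRDDs of earlier batches (RDDs 0 through 71) and dropped the metadata of batch 1523971980000. The next batch, 1523972160000, is generated exactly 60 000 ms later, so a 60-second batch interval appears to be in use. A minimal driver-setup sketch consistent with that timing; the class name and configuration are assumptions, since the real PredictorEngineApp setup is not visible in this log:

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    public final class PredictorEngineSketch {               // hypothetical name
        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf().setAppName("predictor-engine");
            // Batches 1523972100000 and 1523972160000 are 60 000 ms apart,
            // so a 60-second batch interval is assumed here.
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

            // Kafka streams and output operations would be registered here
            // (see the createDirectStream/foreachPartition sketch further below).

            // The "Removing RDD n from persistence list" lines above need no user code:
            // spark.streaming.unpersist defaults to true, so Spark Streaming unpersists
            // the KafkaRDDs of finished batches automatically.
            jssc.start();
            jssc.awaitTermination();
        }
    }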
16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.0 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.3 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.4 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.0 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.6 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.5 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.7 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.4 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.3 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.8 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.10 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.9 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.11 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.13 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.12 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.14 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.13 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.15 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.17 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.17 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.16 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.14 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.19 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.21 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.18 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.16 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.20 from job set of time 1523972160000 ms 18/04/17 
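For batch 1523972160000 the scheduler starts numbered output jobs ms.0 through ms.35, i.e. roughly three dozen output operations are registered on the streaming context, and several of them are started before earlier ones finish. That interleaving is consistent with spark.streaming.concurrentJobs having been raised above its default of 1, although the actual value is not visible in this log. A hedged sketch of that setting, added to the SparkConf from the setup sketch above; "4" is a placeholder value, not taken from the application:

    SparkConf conf = new SparkConf()
            .setAppName("predictor-engine")
            .set("spark.streaming.concurrentJobs", "4");  // illustrative value only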
16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.22 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.24 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.23 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.25 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.26 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.27 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.28 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.29 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.30 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.31 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.30 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.32 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.35 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.33 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972160000 ms.34 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.21 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 78 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 78 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 78 (KafkaRDD[137] at 
createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_78 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_78_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_78_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 78 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 78 (KafkaRDD[137] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 78.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 79 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 79 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 79 (KafkaRDD[109] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 78.0 (TID 78, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_79 stored as values 
in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_79_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_79_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 79 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 79 (KafkaRDD[109] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 79.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 80 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 80 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 80 (KafkaRDD[139] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 79.0 (TID 79, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_80 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_80_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_80_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 80 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 80 (KafkaRDD[139] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 80.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 81 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 81 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 81 (KafkaRDD[114] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 80.0 (TID 80, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_81 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_81_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_81_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 81 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 81 (KafkaRDD[114] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 81.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 82 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 82 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 82 (KafkaRDD[118] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 81.0 (TID 81, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_82 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_82_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_82_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 82 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 82 (KafkaRDD[118] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 82.0 with 1 tasks 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_79_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 83 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 83 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 83 (KafkaRDD[120] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 82.0 (TID 82, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_83 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_83_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_83_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 83 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 83 (KafkaRDD[120] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 83.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 84 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 84 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 
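Each "Got job N (foreachPartition at PredictorEngineApp.java:153)" entry is one of those output operations being planned: a single ResultStage over a KafkaRDD created by createDirectStream at PredictorEngineApp.java:125, with one task per Kafka partition ("with 1 output partitions", "Adding task set N.0 with 1 tasks"). The application code itself is not in this log; the sketch below is only a hypothetical reconstruction of that createDirectStream/foreachPartition pattern on the Spark 1.6 Kafka 0.8 direct API, written as a method of the sketch class above. Broker, topic, and the per-record work are placeholders, and the per-partition HBase connection is likewise an assumption, included because an HBase client is clearly opened and closed around the batch (the zookeeper.ClientCnxn / ConnectionManager lines at 16:35:21 above); where the real application creates that connection is not visible here.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    import kafka.serializer.StringDecoder;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    /** Hypothetical pipeline registration; broker, topic and record handling are placeholders. */
    static void registerPipeline(JavaStreamingContext jssc) {
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker-1:9092");        // placeholder
        Set<String> topics = new HashSet<>(Arrays.asList("some-topic")); // placeholder

        // Roughly what "createDirectStream at PredictorEngineApp.java:125" implies:
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        // Roughly what "foreachPartition at PredictorEngineApp.java:153" implies:
        stream.foreachRDD(rdd ->
                rdd.foreachPartition(records -> {
                    // One task per Kafka partition of the KafkaRDD
                    // ("Adding task set N.0 with 1 tasks" above).
                    Connection hbase =
                            ConnectionFactory.createConnection(HBaseConfiguration.create());
                    try {
                        while (records.hasNext()) {
                            Tuple2<String, String> record = records.next();
                            // score/write the record -- the real logic is not in this log
                        }
                    } finally {
                        hbase.close(); // releases the client's ZooKeeper session
                    }
                }));
    }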
16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 83.0 (TID 83, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 84 (KafkaRDD[135] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_84 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_84_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_84_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 84 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 84 (KafkaRDD[135] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 84.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 85 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 85 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_81_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 85 (KafkaRDD[128] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 84.0 (TID 84, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_82_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_85 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_80_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_85_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_85_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 85 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 85 (KafkaRDD[128] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 85.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 86 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 86 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO 
scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 86 (KafkaRDD[142] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 85.0 (TID 85, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_86 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_83_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_86_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_86_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 86 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 86 (KafkaRDD[142] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 86.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 87 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 87 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 87 (KafkaRDD[130] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 86.0 (TID 86, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_87 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_78_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_87_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_87_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 87 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 87 (KafkaRDD[130] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 87.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 88 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 88 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 88 (KafkaRDD[126] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 87.0 (TID 87, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_88 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_85_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_88_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_88_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 88 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 88 (KafkaRDD[126] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 88.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 89 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 89 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 89 (KafkaRDD[117] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 88.0 (TID 88, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_89 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_87_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_89_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_89_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 89 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 89 (KafkaRDD[117] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 89.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 90 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 90 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 90 (KafkaRDD[133] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 89.0 (TID 89, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_90 stored as values in memory (estimated size 5.7 KB, free 
491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_84_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_86_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_90_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_90_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 90 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 90 (KafkaRDD[133] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 90.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 91 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 91 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 91 (KafkaRDD[123] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 90.0 (TID 90, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_91 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_91_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_91_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 91 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 91 (KafkaRDD[123] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 91.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 93 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 92 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 92 (KafkaRDD[113] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 91.0 (TID 91, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_92 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_88_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_89_piece0 in memory on ***hostname 
masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_92_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_92_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 92 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 92 (KafkaRDD[113] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 92.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 92 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 93 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 93 (KafkaRDD[132] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 92.0 (TID 92, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_93 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_93_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_93_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 93 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 93 (KafkaRDD[132] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 93.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 94 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 94 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 94 (KafkaRDD[134] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_94 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 93.0 (TID 93, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_94_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_94_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 94 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 94 (KafkaRDD[134] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 94.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 95 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 95 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 95 (KafkaRDD[119] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_91_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 94.0 (TID 94, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_95 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_95_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_92_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_95_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 95 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 95 (KafkaRDD[119] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 95.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 96 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 96 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 96 (KafkaRDD[136] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 95.0 (TID 95, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_90_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_96 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_93_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_96_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_96_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 96 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from 
ResultStage 96 (KafkaRDD[136] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 96.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 98 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 97 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 97 (KafkaRDD[143] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 96.0 (TID 96, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_97 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_97_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_97_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 97 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 97 (KafkaRDD[143] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 97.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 97 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 98 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 98 (KafkaRDD[110] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_98 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 97.0 (TID 97, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_98_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_98_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 98 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 98 (KafkaRDD[110] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 98.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 99 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 99 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO 
scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 99 (KafkaRDD[116] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_99 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 98.0 (TID 98, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_94_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_99_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_99_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_95_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 99 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 99 (KafkaRDD[116] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 99.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 100 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 100 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 100 (KafkaRDD[127] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 99.0 (TID 99, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_100 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_97_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_100_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_100_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 100 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 100 (KafkaRDD[127] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 100.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 101 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 101 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: 
Submitting ResultStage 101 (KafkaRDD[141] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 100.0 (TID 100, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_101 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_98_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_96_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_101_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_101_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 101 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 101 (KafkaRDD[141] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 101.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 102 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 102 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 102 (KafkaRDD[140] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_102 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 101.0 (TID 101, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_102_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_102_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 102 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 102 (KafkaRDD[140] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 102.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 103 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 103 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 103 (KafkaRDD[115] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_103 stored as values in memory (estimated size 5.7 
KB, free 491.1 MB) 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 102.0 (TID 102, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_99_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_103_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_103_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 103 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 103 (KafkaRDD[115] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 103.0 with 1 tasks 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Got job 104 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 104 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting ResultStage 104 (KafkaRDD[131] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 103.0 (TID 103, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_104 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.MemoryStore: Block broadcast_104_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_104_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:36:00 INFO spark.SparkContext: Created broadcast 104 from broadcast at DAGScheduler.scala:1006 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 104 (KafkaRDD[131] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Adding task set 104.0 with 1 tasks 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_101_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 104.0 (TID 104, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_102_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_100_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_103_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO storage.BlockManagerInfo: Added broadcast_104_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 82.0 (TID 82) in 179 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:36:00 INFO 
cluster.YarnClusterScheduler: Removed TaskSet 82.0, whose tasks have all completed, from pool 18/04/17 16:36:00 INFO scheduler.DAGScheduler: ResultStage 82 (foreachPartition at PredictorEngineApp.java:153) finished in 0.180 s 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Job 82 finished: foreachPartition at PredictorEngineApp.java:153, took 0.221462 s 18/04/17 16:36:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x39914f3d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x39914f3d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35321, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28add, negotiated timeout = 60000 18/04/17 16:36:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28add 18/04/17 16:36:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28add closed 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 86.0 (TID 86) in 182 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 86.0, whose tasks have all completed, from pool 18/04/17 16:36:00 INFO scheduler.DAGScheduler: ResultStage 86 (foreachPartition at PredictorEngineApp.java:153) finished in 0.182 s 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Job 86 finished: foreachPartition at PredictorEngineApp.java:153, took 0.247229 s 18/04/17 16:36:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f42444 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f424440x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35324, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.10 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ade, negotiated timeout = 60000 18/04/17 16:36:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ade 18/04/17 16:36:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ade closed 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.34 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 98.0 (TID 98) in 199 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 98.0, whose tasks have all completed, from pool 18/04/17 16:36:00 INFO scheduler.DAGScheduler: ResultStage 98 (foreachPartition at PredictorEngineApp.java:153) finished in 0.200 s 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Job 97 finished: foreachPartition at PredictorEngineApp.java:153, took 0.320675 s 18/04/17 16:36:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x78a6afa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x78a6afa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58965, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91f4, negotiated timeout = 60000 18/04/17 16:36:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91f4 18/04/17 16:36:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91f4 closed 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.2 from job set of time 1523972160000 ms 18/04/17 16:36:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 97.0 (TID 97) in 309 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:36:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 97.0, whose tasks have all completed, from pool 18/04/17 16:36:00 INFO scheduler.DAGScheduler: ResultStage 97 (foreachPartition at PredictorEngineApp.java:153) finished in 0.310 s 18/04/17 16:36:00 INFO scheduler.DAGScheduler: Job 98 finished: foreachPartition at PredictorEngineApp.java:153, took 0.428236 s 18/04/17 16:36:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x20c9f64c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x20c9f64c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52586, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91bd, negotiated timeout = 60000 18/04/17 16:36:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91bd 18/04/17 16:36:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91bd closed 18/04/17 16:36:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.35 from job set of time 1523972160000 ms 18/04/17 16:36:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 90.0 (TID 90) in 1076 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:36:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 90.0, whose tasks have all completed, from pool 18/04/17 16:36:01 INFO scheduler.DAGScheduler: ResultStage 90 (foreachPartition at PredictorEngineApp.java:153) finished in 1.077 s 18/04/17 16:36:01 INFO scheduler.DAGScheduler: Job 90 finished: foreachPartition at PredictorEngineApp.java:153, took 1.164117 s 18/04/17 16:36:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x428d7c41 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x428d7c410x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35334, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ae7, negotiated timeout = 60000 18/04/17 16:36:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ae7 18/04/17 16:36:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ae7 closed 18/04/17 16:36:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:01 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.25 from job set of time 1523972160000 ms 18/04/17 16:36:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 103.0 (TID 103) in 1682 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:36:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 103.0, whose tasks have all completed, from pool 18/04/17 16:36:01 INFO scheduler.DAGScheduler: ResultStage 103 (foreachPartition at PredictorEngineApp.java:153) finished in 1.683 s 18/04/17 16:36:01 INFO scheduler.DAGScheduler: Job 103 finished: foreachPartition at PredictorEngineApp.java:153, took 1.820661 s 18/04/17 16:36:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x68b0d419 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x68b0d4190x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52595, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91bf, negotiated timeout = 60000 18/04/17 16:36:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91bf 18/04/17 16:36:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91bf closed 18/04/17 16:36:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:01 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.7 from job set of time 1523972160000 ms 18/04/17 16:36:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 83.0 (TID 83) in 2710 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:36:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 83.0, whose tasks have all completed, from pool 18/04/17 16:36:02 INFO scheduler.DAGScheduler: ResultStage 83 (foreachPartition at PredictorEngineApp.java:153) finished in 2.710 s 18/04/17 16:36:02 INFO scheduler.DAGScheduler: Job 83 finished: foreachPartition at PredictorEngineApp.java:153, took 2.756921 s 18/04/17 16:36:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x33bd08f1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x33bd08f10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52599, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91c0, negotiated timeout = 60000 18/04/17 16:36:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91c0 18/04/17 16:36:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91c0 closed 18/04/17 16:36:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.12 from job set of time 1523972160000 ms 18/04/17 16:36:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 99.0 (TID 99) in 2879 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:36:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 99.0, whose tasks have all completed, from pool 18/04/17 16:36:03 INFO scheduler.DAGScheduler: ResultStage 99 (foreachPartition at PredictorEngineApp.java:153) finished in 2.886 s 18/04/17 16:36:03 INFO scheduler.DAGScheduler: Job 99 finished: foreachPartition at PredictorEngineApp.java:153, took 3.010531 s 18/04/17 16:36:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b542e4b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1b542e4b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35347, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28aea, negotiated timeout = 60000 18/04/17 16:36:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28aea 18/04/17 16:36:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28aea closed 18/04/17 16:36:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.8 from job set of time 1523972160000 ms 18/04/17 16:36:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 100.0 (TID 100) in 3207 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:36:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 100.0, whose tasks have all completed, from pool 18/04/17 16:36:03 INFO scheduler.DAGScheduler: ResultStage 100 (foreachPartition at PredictorEngineApp.java:153) finished in 3.208 s 18/04/17 16:36:03 INFO scheduler.DAGScheduler: Job 100 finished: foreachPartition at PredictorEngineApp.java:153, took 3.342232 s 18/04/17 16:36:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6fb78f66 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6fb78f660x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52606, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91c3, negotiated timeout = 60000 18/04/17 16:36:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91c3 18/04/17 16:36:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91c3 closed 18/04/17 16:36:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.19 from job set of time 1523972160000 ms 18/04/17 16:36:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 102.0 (TID 102) in 5250 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:36:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 102.0, whose tasks have all completed, from pool 18/04/17 16:36:05 INFO scheduler.DAGScheduler: ResultStage 102 (foreachPartition at PredictorEngineApp.java:153) finished in 5.251 s 18/04/17 16:36:05 INFO scheduler.DAGScheduler: Job 102 finished: foreachPartition at PredictorEngineApp.java:153, took 5.386278 s 18/04/17 16:36:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x469db523 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x469db5230x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58993, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91fb, negotiated timeout = 60000 18/04/17 16:36:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91fb 18/04/17 16:36:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91fb closed 18/04/17 16:36:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.32 from job set of time 1523972160000 ms 18/04/17 16:36:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 93.0 (TID 93) in 5975 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:36:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 93.0, whose tasks have all completed, from pool 18/04/17 16:36:06 INFO scheduler.DAGScheduler: ResultStage 93 (foreachPartition at PredictorEngineApp.java:153) finished in 5.977 s 18/04/17 16:36:06 INFO scheduler.DAGScheduler: Job 92 finished: foreachPartition at PredictorEngineApp.java:153, took 6.078335 s 18/04/17 16:36:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d3cf443 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d3cf4430x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58998, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91fc, negotiated timeout = 60000 18/04/17 16:36:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91fc 18/04/17 16:36:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91fc closed 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.24 from job set of time 1523972160000 ms 18/04/17 16:36:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 85.0 (TID 85) in 6172 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:36:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 85.0, whose tasks have all completed, from pool 18/04/17 16:36:06 INFO scheduler.DAGScheduler: ResultStage 85 (foreachPartition at PredictorEngineApp.java:153) finished in 6.172 s 18/04/17 16:36:06 INFO scheduler.DAGScheduler: Job 85 finished: foreachPartition at PredictorEngineApp.java:153, took 6.231408 s 18/04/17 16:36:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x70c6a619 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x70c6a6190x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35363, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28aec, negotiated timeout = 60000 18/04/17 16:36:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28aec 18/04/17 16:36:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28aec closed 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.20 from job set of time 1523972160000 ms 18/04/17 16:36:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 80.0 (TID 80) in 6399 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:36:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 80.0, whose tasks have all completed, from pool 18/04/17 16:36:06 INFO scheduler.DAGScheduler: ResultStage 80 (foreachPartition at PredictorEngineApp.java:153) finished in 6.400 s 18/04/17 16:36:06 INFO scheduler.DAGScheduler: Job 80 finished: foreachPartition at PredictorEngineApp.java:153, took 6.433278 s 18/04/17 16:36:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1e38f474 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1e38f4740x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35367, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28aed, negotiated timeout = 60000 18/04/17 16:36:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28aed 18/04/17 16:36:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28aed closed 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.31 from job set of time 1523972160000 ms 18/04/17 16:36:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 89.0 (TID 89) in 6622 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:36:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 89.0, whose tasks have all completed, from pool 18/04/17 16:36:06 INFO scheduler.DAGScheduler: ResultStage 89 (foreachPartition at PredictorEngineApp.java:153) finished in 6.623 s 18/04/17 16:36:06 INFO scheduler.DAGScheduler: Job 89 finished: foreachPartition at PredictorEngineApp.java:153, took 6.704438 s 18/04/17 16:36:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7a09992d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7a09992d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52627, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91c7, negotiated timeout = 60000 18/04/17 16:36:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91c7 18/04/17 16:36:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91c7 closed 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.9 from job set of time 1523972160000 ms 18/04/17 16:36:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 94.0 (TID 94) in 6675 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:36:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 94.0, whose tasks have all completed, from pool 18/04/17 16:36:06 INFO scheduler.DAGScheduler: ResultStage 94 (foreachPartition at PredictorEngineApp.java:153) finished in 6.676 s 18/04/17 16:36:06 INFO scheduler.DAGScheduler: Job 94 finished: foreachPartition at PredictorEngineApp.java:153, took 6.782144 s 18/04/17 16:36:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xa9002b9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xa9002b90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59012, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91fe, negotiated timeout = 60000 18/04/17 16:36:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91fe 18/04/17 16:36:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91fe closed 18/04/17 16:36:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.26 from job set of time 1523972160000 ms 18/04/17 16:36:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 104.0 (TID 104) in 7118 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:36:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 104.0, whose tasks have all completed, from pool 18/04/17 16:36:07 INFO scheduler.DAGScheduler: ResultStage 104 (foreachPartition at PredictorEngineApp.java:153) finished in 7.118 s 18/04/17 16:36:07 INFO scheduler.DAGScheduler: Job 104 finished: foreachPartition at PredictorEngineApp.java:153, took 7.259508 s 18/04/17 16:36:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x82edaf1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x82edaf10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59016, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c91ff, negotiated timeout = 60000 18/04/17 16:36:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c91ff 18/04/17 16:36:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c91ff closed 18/04/17 16:36:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.23 from job set of time 1523972160000 ms 18/04/17 16:36:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 78.0 (TID 78) in 7358 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:36:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 78.0, whose tasks have all completed, from pool 18/04/17 16:36:07 INFO scheduler.DAGScheduler: ResultStage 78 (foreachPartition at PredictorEngineApp.java:153) finished in 7.358 s 18/04/17 16:36:07 INFO scheduler.DAGScheduler: Job 78 finished: foreachPartition at PredictorEngineApp.java:153, took 7.383509 s 18/04/17 16:36:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3cc71e40 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3cc71e400x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59019, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9200, negotiated timeout = 60000 18/04/17 16:36:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9200 18/04/17 16:36:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9200 closed 18/04/17 16:36:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.29 from job set of time 1523972160000 ms 18/04/17 16:36:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 84.0 (TID 84) in 8667 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:36:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 84.0, whose tasks have all completed, from pool 18/04/17 16:36:08 INFO scheduler.DAGScheduler: ResultStage 84 (foreachPartition at PredictorEngineApp.java:153) finished in 8.667 s 18/04/17 16:36:08 INFO scheduler.DAGScheduler: Job 84 finished: foreachPartition at PredictorEngineApp.java:153, took 8.720583 s 18/04/17 16:36:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d0f3294 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7d0f32940x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35385, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28af0, negotiated timeout = 60000 18/04/17 16:36:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28af0 18/04/17 16:36:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28af0 closed 18/04/17 16:36:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.27 from job set of time 1523972160000 ms 18/04/17 16:36:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 88.0 (TID 88) in 8743 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:36:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 88.0, whose tasks have all completed, from pool 18/04/17 16:36:08 INFO scheduler.DAGScheduler: ResultStage 88 (foreachPartition at PredictorEngineApp.java:153) finished in 8.745 s 18/04/17 16:36:08 INFO scheduler.DAGScheduler: Job 88 finished: foreachPartition at PredictorEngineApp.java:153, took 8.820387 s 18/04/17 16:36:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x201f131e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x201f131e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52644, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91cb, negotiated timeout = 60000 18/04/17 16:36:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91cb 18/04/17 16:36:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91cb closed 18/04/17 16:36:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.18 from job set of time 1523972160000 ms 18/04/17 16:36:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 91.0 (TID 91) in 8881 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:36:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 91.0, whose tasks have all completed, from pool 18/04/17 16:36:09 INFO scheduler.DAGScheduler: ResultStage 91 (foreachPartition at PredictorEngineApp.java:153) finished in 8.882 s 18/04/17 16:36:09 INFO scheduler.DAGScheduler: Job 91 finished: foreachPartition at PredictorEngineApp.java:153, took 8.973409 s 18/04/17 16:36:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1e4c04e9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1e4c04e90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59030, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9202, negotiated timeout = 60000 18/04/17 16:36:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9202 18/04/17 16:36:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9202 closed 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.15 from job set of time 1523972160000 ms 18/04/17 16:36:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 96.0 (TID 96) in 9599 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:36:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 96.0, whose tasks have all completed, from pool 18/04/17 16:36:09 INFO scheduler.DAGScheduler: ResultStage 96 (foreachPartition at PredictorEngineApp.java:153) finished in 9.600 s 18/04/17 16:36:09 INFO scheduler.DAGScheduler: Job 96 finished: foreachPartition at PredictorEngineApp.java:153, took 9.714132 s 18/04/17 16:36:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b8e34da connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7b8e34da0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35395, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28af1, negotiated timeout = 60000 18/04/17 16:36:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28af1 18/04/17 16:36:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28af1 closed 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.28 from job set of time 1523972160000 ms 18/04/17 16:36:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 92.0 (TID 92) in 9683 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:36:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 92.0, whose tasks have all completed, from pool 18/04/17 16:36:09 INFO scheduler.DAGScheduler: ResultStage 92 (foreachPartition at PredictorEngineApp.java:153) finished in 9.684 s 18/04/17 16:36:09 INFO scheduler.DAGScheduler: Job 93 finished: foreachPartition at PredictorEngineApp.java:153, took 9.780278 s 18/04/17 16:36:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x547511fa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x547511fa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52654, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91cd, negotiated timeout = 60000 18/04/17 16:36:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91cd 18/04/17 16:36:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91cd closed 18/04/17 16:36:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.5 from job set of time 1523972160000 ms 18/04/17 16:36:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 101.0 (TID 101) in 11307 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:36:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 101.0, whose tasks have all completed, from pool 18/04/17 16:36:11 INFO scheduler.DAGScheduler: ResultStage 101 (foreachPartition at PredictorEngineApp.java:153) finished in 11.308 s 18/04/17 16:36:11 INFO scheduler.DAGScheduler: Job 101 finished: foreachPartition at PredictorEngineApp.java:153, took 11.445977 s 18/04/17 16:36:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2ac1d39f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2ac1d39f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35404, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28af4, negotiated timeout = 60000 18/04/17 16:36:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28af4 18/04/17 16:36:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28af4 closed 18/04/17 16:36:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 81.0 (TID 81) in 11439 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:36:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 81.0, whose tasks have all completed, from pool 18/04/17 16:36:11 INFO scheduler.DAGScheduler: ResultStage 81 (foreachPartition at PredictorEngineApp.java:153) finished in 11.440 s 18/04/17 16:36:11 INFO scheduler.DAGScheduler: Job 81 finished: foreachPartition at PredictorEngineApp.java:153, took 11.477510 s 18/04/17 16:36:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6b725810 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6b7258100x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52663, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.33 from job set of time 1523972160000 ms 18/04/17 16:36:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91d2, negotiated timeout = 60000 18/04/17 16:36:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91d2 18/04/17 16:36:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91d2 closed 18/04/17 16:36:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.6 from job set of time 1523972160000 ms 18/04/17 16:36:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 79.0 (TID 79) in 11993 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:36:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 79.0, whose tasks have all completed, from pool 18/04/17 16:36:12 INFO scheduler.DAGScheduler: ResultStage 79 (foreachPartition at PredictorEngineApp.java:153) finished in 11.993 s 18/04/17 16:36:12 INFO scheduler.DAGScheduler: Job 79 finished: foreachPartition at PredictorEngineApp.java:153, took 12.022554 s 18/04/17 16:36:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xc84a03a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xc84a03a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35411, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28af6, negotiated timeout = 60000 18/04/17 16:36:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28af6 18/04/17 16:36:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28af6 closed 18/04/17 16:36:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:12 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.1 from job set of time 1523972160000 ms 18/04/17 16:36:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 87.0 (TID 87) in 13523 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:36:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 87.0, whose tasks have all completed, from pool 18/04/17 16:36:13 INFO scheduler.DAGScheduler: ResultStage 87 (foreachPartition at PredictorEngineApp.java:153) finished in 13.524 s 18/04/17 16:36:13 INFO scheduler.DAGScheduler: Job 87 finished: foreachPartition at PredictorEngineApp.java:153, took 13.593879 s 18/04/17 16:36:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4da2a4d6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4da2a4d60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52671, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91d6, negotiated timeout = 60000 18/04/17 16:36:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91d6 18/04/17 16:36:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91d6 closed 18/04/17 16:36:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.22 from job set of time 1523972160000 ms 18/04/17 16:36:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 95.0 (TID 95) in 15078 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:36:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 95.0, whose tasks have all completed, from pool 18/04/17 16:36:15 INFO scheduler.DAGScheduler: ResultStage 95 (foreachPartition at PredictorEngineApp.java:153) finished in 15.079 s 18/04/17 16:36:15 INFO scheduler.DAGScheduler: Job 95 finished: foreachPartition at PredictorEngineApp.java:153, took 15.189821 s 18/04/17 16:36:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x35ff7e03 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:36:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x35ff7e030x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:36:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
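The session churn above (a fresh hconnection-0x... ZooKeeper session opened and immediately closed for roughly every completed task) is the usual signature of an HBase connection being created inside each foreachPartition call. The source of PredictorEngineApp.java is not part of this log, so the following is only a minimal sketch of the kind of Spark 1.6 Java driver that would produce these records, assuming a Kafka 0.8 direct stream (createDirectStream at line 125) written to HBase per partition (foreachPartition at line 153); the broker list, topic, table, family and column names below are hypothetical.

    import java.util.*;
    import kafka.serializer.StringDecoder;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.*;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    public final class PredictorEngineSketch {
      public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        // 60 s batches, matching the one-minute batch times (1523972160000, 1523972220000, ...) in this log.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical brokers
        Set<String> topics = Collections.singleton("events");                 // hypothetical topic

        // Roughly what "createDirectStream at PredictorEngineApp.java:125" would look like.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
            jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
            kafkaParams, topics);

        // Roughly what "foreachPartition at PredictorEngineApp.java:153" would look like.
        // Creating the HBase Connection inside foreachPartition opens (and then closes)
        // a fresh ZooKeeper session per task, which is the churn visible in the log above.
        stream.foreachRDD(rdd -> rdd.foreachPartition(records -> {
          Configuration hbaseConf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
          try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
               Table table = connection.getTable(TableName.valueOf("predictions"))) { // hypothetical table
            while (records.hasNext()) {
              Tuple2<String, String> record = records.next();
              Put put = new Put(Bytes.toBytes(record._1()));
              put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"), Bytes.toBytes(record._2()));
              table.put(put);
            }
          }
        }));

        jssc.start();
        jssc.awaitTermination();
      }
    }
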
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:36:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52676, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:36:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91d7, negotiated timeout = 60000 18/04/17 16:36:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91d7 18/04/17 16:36:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91d7 closed 18/04/17 16:36:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:36:15 INFO scheduler.JobScheduler: Finished job streaming job 1523972160000 ms.11 from job set of time 1523972160000 ms 18/04/17 16:36:15 INFO scheduler.JobScheduler: Total delay: 15.304 s for time 1523972160000 ms (execution: 15.231 s) 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 72 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 72 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 72 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 72 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 73 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 73 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 73 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 73 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 74 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 74 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 74 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 74 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 75 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 75 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 75 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 75 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 76 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 76 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 76 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 76 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 77 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 77 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 77 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 77 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 78 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 78 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 78 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 78 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 79 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 79 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 79 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 79 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 80 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 80 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 80 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 80 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 81 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: 
Removing RDD 81 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 81 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 81 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 82 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 82 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 82 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 82 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 83 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 83 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 83 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 83 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 84 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 84 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 84 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 84 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 85 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 85 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 85 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 85 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 86 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 86 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 86 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 86 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 87 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 87 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 87 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 87 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 88 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 88 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 88 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 88 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 89 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 89 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 89 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 89 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 90 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 90 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 90 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 90 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 91 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 91 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 91 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 91 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 92 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 92 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 92 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 92 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 93 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 93 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 93 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 93 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 94 from persistence list 18/04/17 
16:36:15 INFO storage.BlockManager: Removing RDD 94 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 94 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 94 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 95 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 95 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 95 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 95 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 96 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 96 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 96 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 96 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 97 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 97 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 97 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 97 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 98 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 98 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 98 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 98 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 99 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 99 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 99 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 99 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 100 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 100 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 100 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 100 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 101 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 101 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 101 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 101 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 102 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 102 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 102 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 102 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 103 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 103 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 103 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 103 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 104 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 104 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 104 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 104 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 105 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 105 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 105 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 105 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 106 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 106 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 106 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 106 18/04/17 16:36:15 INFO 
kafka.KafkaRDD: Removing RDD 107 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 107 18/04/17 16:36:15 INFO kafka.KafkaRDD: Removing RDD 107 from persistence list 18/04/17 16:36:15 INFO storage.BlockManager: Removing RDD 107 18/04/17 16:36:15 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:36:15 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972040000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Added jobs for time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.0 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.1 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.3 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.2 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.0 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.4 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.7 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.5 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.3 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.6 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.9 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.8 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.4 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.11 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.10 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.14 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.12 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.13 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.14 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.17 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.15 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.13 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.19 from job set of time 1523972220000 ms 18/04/17 
16:37:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.17 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.16 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.21 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.18 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.20 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.21 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.16 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.24 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.25 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.22 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.23 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.26 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.28 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.27 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.29 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.30 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.31 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.30 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.32 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.34 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.33 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972220000 ms.35 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.35 from job set of time 1523972220000 ms 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 106 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 105 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 105 (KafkaRDD[171] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_105 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_105_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_105_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 105 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 105 (KafkaRDD[171] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 105.0 with 1 tasks 
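Each one-minute batch above fans out into several dozen independent streaming jobs (ms.0 through ms.35), and each job becomes a single-task ResultStage over its own KafkaRDD with its own roughly 3 KB task-binary broadcast. That shape is consistent with the driver registering one direct stream and one foreachPartition output operation per topic rather than a single multi-topic stream, since Spark Streaming generates one job per registered output operation per batch. A minimal sketch of that layout, under the same assumptions as the sketch above (topic and broker names are hypothetical, and the per-record HBase write is omitted):

    import java.util.*;
    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.*;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    public final class PerTopicStreamsSketch {
      public static void main(String[] args) throws Exception {
        JavaStreamingContext jssc = new JavaStreamingContext(
            new SparkConf().setAppName("predictor-engine-sketch"), Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092"); // hypothetical broker

        // One direct stream and one foreachPartition output operation per topic:
        // every registered output operation becomes its own job in each batch, so a few
        // dozen topics yield the jobs ms.0 .. ms.35 and one single-task ResultStage per
        // single-partition KafkaRDD seen in this log.
        List<String> topics = Arrays.asList("topicA", "topicB", "topicC"); // hypothetical names
        for (String topic : topics) {
          JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
              jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
              kafkaParams, Collections.singleton(topic));
          stream.foreachRDD(rdd -> rdd.foreachPartition(records -> {
            while (records.hasNext()) {
              records.next(); // write to HBase per record, as in the sketch above
            }
          }));
        }

        jssc.start();
        jssc.awaitTermination();
      }
    }
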
18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 105 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 106 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 106 (KafkaRDD[176] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 105.0 (TID 105, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_106 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Removed broadcast_100_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_106_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_106_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 106 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 106 (KafkaRDD[176] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 106.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 107 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 107 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 107 (KafkaRDD[153] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 106.0 (TID 106, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_107 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Removed broadcast_100_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_105_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_107_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Removed broadcast_101_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_107_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 107 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 107 (KafkaRDD[153] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO 
cluster.YarnClusterScheduler: Adding task set 107.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 108 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 108 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 108 (KafkaRDD[169] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 107.0 (TID 107, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_108 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Removed broadcast_101_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Removed broadcast_102_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_106_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Removed broadcast_102_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_108_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_108_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 108 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 108 (KafkaRDD[169] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 108.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 109 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 109 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 109 (KafkaRDD[155] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 108.0 (TID 108, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Removed broadcast_103_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_109 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Removed broadcast_103_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_109_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Removed broadcast_104_piece0 on 
***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_109_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 109 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 109 (KafkaRDD[155] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 109.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 110 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 110 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 110 (KafkaRDD[175] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 109.0 (TID 109, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_110 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Removed broadcast_104_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_107_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_110_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_110_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 110 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 110 (KafkaRDD[175] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 110.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 111 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 111 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 111 (KafkaRDD[156] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 110.0 (TID 110, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_111 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_108_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_109_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 
16:37:00 INFO storage.MemoryStore: Block broadcast_111_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_111_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 111 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 111 (KafkaRDD[156] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 111.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 112 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 112 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 112 (KafkaRDD[167] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 111.0 (TID 111, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_112 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_110_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_112_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_112_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 112 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 112 (KafkaRDD[167] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 112.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 113 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 113 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 113 (KafkaRDD[173] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 112.0 (TID 112, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_113 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_113_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_113_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 113 from broadcast at DAGScheduler.scala:1006 
18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 113 (KafkaRDD[173] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 113.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 114 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 114 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 114 (KafkaRDD[149] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 113.0 (TID 113, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_114 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_114_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_114_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 114 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 114 (KafkaRDD[149] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 114.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 115 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 115 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 115 (KafkaRDD[159] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_115 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 114.0 (TID 114, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_115_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_115_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 115 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 115 (KafkaRDD[159] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 115.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 116 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 116 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 
16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 116 (KafkaRDD[145] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_116 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 115.0 (TID 115, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_113_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_116_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_116_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 116 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 116 (KafkaRDD[145] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 116.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 117 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 117 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 117 (KafkaRDD[146] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 116.0 (TID 116, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_117 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_117_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_117_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 117 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 117 (KafkaRDD[146] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 117.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 118 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 118 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 118 (KafkaRDD[178] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_118 
stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 117.0 (TID 117, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_114_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_118_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_118_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 118 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 118 (KafkaRDD[178] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 118.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 119 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 119 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 119 (KafkaRDD[152] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 118.0 (TID 118, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_119 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_115_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_116_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_119_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_119_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 119 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 119 (KafkaRDD[152] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 119.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 120 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 120 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 120 (KafkaRDD[166] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_120 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 119.0 (TID 119, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_120_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_120_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 120 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 120 (KafkaRDD[166] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 120.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 121 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 121 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 121 (KafkaRDD[177] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_121 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 120.0 (TID 120, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_118_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_121_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_121_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 121 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 121 (KafkaRDD[177] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 121.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 122 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 122 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 122 (KafkaRDD[150] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_122 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 121.0 (TID 121, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_119_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_117_piece0 in memory on 
***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_122_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_122_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 122 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 122 (KafkaRDD[150] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 122.0 with 1 tasks 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_120_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 124 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 123 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 123 (KafkaRDD[163] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 122.0 (TID 122, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_111_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_123 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_123_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.9 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_123_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 123 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 123 (KafkaRDD[163] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 123.0 with 1 tasks 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_121_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 123 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 124 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 124 (KafkaRDD[164] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 123.0 (TID 123, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_124 stored as values in memory (estimated size 5.7 KB, free 490.9 MB) 18/04/17 16:37:00 INFO 
storage.MemoryStore: Block broadcast_124_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.9 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_124_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 124 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 124 (KafkaRDD[164] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 124.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 125 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 125 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 125 (KafkaRDD[162] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_125 stored as values in memory (estimated size 5.7 KB, free 490.9 MB) 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 124.0 (TID 124, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_122_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_125_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.9 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_125_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 125 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 125 (KafkaRDD[162] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 125.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 126 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 126 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 126 (KafkaRDD[151] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 125.0 (TID 125, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_126 stored as values in memory (estimated size 5.7 KB, free 490.9 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_112_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_123_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_126_piece0 stored as bytes in memory (estimated size 
3.1 KB, free 490.9 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_126_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 126 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 126 (KafkaRDD[151] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 126.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 127 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 127 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 127 (KafkaRDD[168] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 126.0 (TID 126, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_127 stored as values in memory (estimated size 5.7 KB, free 490.9 MB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_127_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.9 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_127_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 127 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 127 (KafkaRDD[168] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 127.0 with 1 tasks 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_125_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 128 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 128 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 128 (KafkaRDD[154] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_126_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_124_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_128 stored as values in memory (estimated size 5.7 KB, free 490.9 MB) 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 127.0 (TID 127, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_128_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.9 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added 
broadcast_128_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 128 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 128 (KafkaRDD[154] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 128.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 129 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 129 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 129 (KafkaRDD[170] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_129 stored as values in memory (estimated size 5.7 KB, free 490.9 MB) 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 128.0 (TID 128, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_129_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.9 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_129_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 129 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 129 (KafkaRDD[170] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 129.0 with 1 tasks 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Got job 130 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 130 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting ResultStage 130 (KafkaRDD[172] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_130 stored as values in memory (estimated size 5.7 KB, free 490.9 MB) 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 129.0 (TID 129, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:37:00 INFO storage.MemoryStore: Block broadcast_130_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.9 MB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_130_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:37:00 INFO spark.SparkContext: Created broadcast 130 from broadcast at DAGScheduler.scala:1006 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 130 (KafkaRDD[172] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Adding task set 130.0 with 1 tasks 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_127_piece0 in 
memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 130.0 (TID 130, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_128_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_129_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO storage.BlockManagerInfo: Added broadcast_130_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:37:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 114.0 (TID 114) in 157 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:37:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 114.0, whose tasks have all completed, from pool 18/04/17 16:37:00 INFO scheduler.DAGScheduler: ResultStage 114 (foreachPartition at PredictorEngineApp.java:153) finished in 0.158 s 18/04/17 16:37:00 INFO scheduler.DAGScheduler: Job 114 finished: foreachPartition at PredictorEngineApp.java:153, took 0.237472 s 18/04/17 16:37:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x22d47762 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x22d477620x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35578, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b06, negotiated timeout = 60000 18/04/17 16:37:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b06 18/04/17 16:37:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b06 closed 18/04/17 16:37:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.5 from job set of time 1523972220000 ms 18/04/17 16:37:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 108.0 (TID 108) in 1080 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:37:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 108.0, whose tasks have all completed, from pool 18/04/17 16:37:01 INFO scheduler.DAGScheduler: ResultStage 108 (foreachPartition at PredictorEngineApp.java:153) finished in 1.082 s 18/04/17 16:37:01 INFO scheduler.DAGScheduler: Job 108 finished: foreachPartition at PredictorEngineApp.java:153, took 1.130712 s 18/04/17 16:37:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x515087e4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x515087e40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35582, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b0e, negotiated timeout = 60000 18/04/17 16:37:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b0e 18/04/17 16:37:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b0e closed 18/04/17 16:37:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:01 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.25 from job set of time 1523972220000 ms 18/04/17 16:37:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 126.0 (TID 126) in 1918 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:37:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 126.0, whose tasks have all completed, from pool 18/04/17 16:37:02 INFO scheduler.DAGScheduler: ResultStage 126 (foreachPartition at PredictorEngineApp.java:153) finished in 1.919 s 18/04/17 16:37:02 INFO scheduler.DAGScheduler: Job 126 finished: foreachPartition at PredictorEngineApp.java:153, took 2.038244 s 18/04/17 16:37:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66a6c3a3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x66a6c3a30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59227, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9210, negotiated timeout = 60000 18/04/17 16:37:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9210 18/04/17 16:37:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9210 closed 18/04/17 16:37:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.7 from job set of time 1523972220000 ms 18/04/17 16:37:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 119.0 (TID 119) in 2091 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:37:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 119.0, whose tasks have all completed, from pool 18/04/17 16:37:02 INFO scheduler.DAGScheduler: ResultStage 119 (foreachPartition at PredictorEngineApp.java:153) finished in 2.092 s 18/04/17 16:37:02 INFO scheduler.DAGScheduler: Job 119 finished: foreachPartition at PredictorEngineApp.java:153, took 2.186756 s 18/04/17 16:37:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1e6b5355 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1e6b53550x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59230, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9211, negotiated timeout = 60000 18/04/17 16:37:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9211 18/04/17 16:37:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9211 closed 18/04/17 16:37:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.8 from job set of time 1523972220000 ms 18/04/17 16:37:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 110.0 (TID 110) in 3192 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:37:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 110.0, whose tasks have all completed, from pool 18/04/17 16:37:03 INFO scheduler.DAGScheduler: ResultStage 110 (foreachPartition at PredictorEngineApp.java:153) finished in 3.194 s 18/04/17 16:37:03 INFO scheduler.DAGScheduler: Job 110 finished: foreachPartition at PredictorEngineApp.java:153, took 3.253596 s 18/04/17 16:37:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x45adbc04 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x45adbc040x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35596, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b12, negotiated timeout = 60000 18/04/17 16:37:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b12 18/04/17 16:37:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b12 closed 18/04/17 16:37:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.31 from job set of time 1523972220000 ms 18/04/17 16:37:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 127.0 (TID 127) in 3954 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:37:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 127.0, whose tasks have all completed, from pool 18/04/17 16:37:04 INFO scheduler.DAGScheduler: ResultStage 127 (foreachPartition at PredictorEngineApp.java:153) finished in 3.955 s 18/04/17 16:37:04 INFO scheduler.DAGScheduler: Job 127 finished: foreachPartition at PredictorEngineApp.java:153, took 4.085328 s 18/04/17 16:37:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5d9569fe connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5d9569fe0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59238, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9212, negotiated timeout = 60000 18/04/17 16:37:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9212 18/04/17 16:37:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9212 closed 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.24 from job set of time 1523972220000 ms 18/04/17 16:37:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 124.0 (TID 124) in 4072 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:37:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 124.0, whose tasks have all completed, from pool 18/04/17 16:37:04 INFO scheduler.DAGScheduler: ResultStage 124 (foreachPartition at PredictorEngineApp.java:153) finished in 4.073 s 18/04/17 16:37:04 INFO scheduler.DAGScheduler: Job 123 finished: foreachPartition at PredictorEngineApp.java:153, took 4.186431 s 18/04/17 16:37:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1a8ae996 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1a8ae9960x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52859, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91ea, negotiated timeout = 60000 18/04/17 16:37:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91ea 18/04/17 16:37:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 105.0 (TID 105) in 4192 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:37:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 105.0, whose tasks have all completed, from pool 18/04/17 16:37:04 INFO scheduler.DAGScheduler: ResultStage 105 (foreachPartition at PredictorEngineApp.java:153) finished in 4.192 s 18/04/17 16:37:04 INFO scheduler.DAGScheduler: Job 106 finished: foreachPartition at PredictorEngineApp.java:153, took 4.212128 s 18/04/17 16:37:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x28217311 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x282173110x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52862, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91ea closed 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91eb, negotiated timeout = 60000 18/04/17 16:37:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.20 from job set of time 1523972220000 ms 18/04/17 16:37:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91eb 18/04/17 16:37:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91eb closed 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.27 from job set of time 1523972220000 ms 18/04/17 16:37:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 107.0 (TID 107) in 4488 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:37:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 107.0, whose tasks have all completed, from pool 18/04/17 16:37:04 INFO scheduler.DAGScheduler: ResultStage 107 (foreachPartition at PredictorEngineApp.java:153) finished in 4.488 s 18/04/17 16:37:04 INFO scheduler.DAGScheduler: Job 107 finished: foreachPartition at PredictorEngineApp.java:153, took 4.532699 s 18/04/17 16:37:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x73adf8fb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 16:37:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x73adf8fb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52866, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91ec, negotiated timeout = 60000 18/04/17 16:37:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91ec 18/04/17 16:37:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91ec closed 18/04/17 16:37:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.9 from job set of time 1523972220000 ms 18/04/17 16:37:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 122.0 (TID 122) in 4876 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:37:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 122.0, whose tasks have all completed, from pool 18/04/17 16:37:05 INFO scheduler.DAGScheduler: ResultStage 122 (foreachPartition at PredictorEngineApp.java:153) finished in 4.877 s 18/04/17 16:37:05 INFO scheduler.DAGScheduler: Job 122 finished: foreachPartition at PredictorEngineApp.java:153, took 4.984503 s 18/04/17 16:37:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x515da9b1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x515da9b10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52869, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91ed, negotiated timeout = 60000 18/04/17 16:37:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91ed 18/04/17 16:37:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91ed closed 18/04/17 16:37:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.6 from job set of time 1523972220000 ms 18/04/17 16:37:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 118.0 (TID 118) in 7213 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:37:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 118.0, whose tasks have all completed, from pool 18/04/17 16:37:07 INFO scheduler.DAGScheduler: ResultStage 118 (foreachPartition at PredictorEngineApp.java:153) finished in 7.214 s 18/04/17 16:37:07 INFO scheduler.DAGScheduler: Job 118 finished: foreachPartition at PredictorEngineApp.java:153, took 7.303812 s 18/04/17 16:37:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x408db653 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x408db6530x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52875, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 121.0 (TID 121) in 7208 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:37:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 121.0, whose tasks have all completed, from pool 18/04/17 16:37:07 INFO scheduler.DAGScheduler: ResultStage 121 (foreachPartition at PredictorEngineApp.java:153) finished in 7.209 s 18/04/17 16:37:07 INFO scheduler.DAGScheduler: Job 121 finished: foreachPartition at PredictorEngineApp.java:153, took 7.312764 s 18/04/17 16:37:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x17fe81d2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x17fe81d20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91ef, negotiated timeout = 60000 18/04/17 16:37:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59258, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9215, negotiated timeout = 60000 18/04/17 16:37:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9215 18/04/17 16:37:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91ef 18/04/17 16:37:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91ef closed 18/04/17 16:37:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9215 closed 18/04/17 16:37:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.34 from job set of time 1523972220000 ms 18/04/17 16:37:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.33 from job set of time 1523972220000 ms 18/04/17 16:37:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 111.0 (TID 111) in 8415 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:37:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 111.0, whose tasks have all completed, from pool 18/04/17 16:37:08 INFO scheduler.DAGScheduler: ResultStage 111 (foreachPartition at PredictorEngineApp.java:153) finished in 8.416 s 18/04/17 16:37:08 INFO scheduler.DAGScheduler: Job 111 finished: foreachPartition at PredictorEngineApp.java:153, took 8.481002 s 18/04/17 16:37:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3bec685a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3bec685a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59265, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9216, negotiated timeout = 60000 18/04/17 16:37:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9216 18/04/17 16:37:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9216 closed 18/04/17 16:37:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.12 from job set of time 1523972220000 ms 18/04/17 16:37:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 115.0 (TID 115) in 9571 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:37:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 115.0, whose tasks have all completed, from pool 18/04/17 16:37:09 INFO scheduler.DAGScheduler: ResultStage 115 (foreachPartition at PredictorEngineApp.java:153) finished in 9.572 s 18/04/17 16:37:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 125.0 (TID 125) in 9533 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:37:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 125.0, whose tasks have all completed, from pool 18/04/17 16:37:09 INFO scheduler.DAGScheduler: Job 115 finished: foreachPartition at PredictorEngineApp.java:153, took 9.655992 s 18/04/17 16:37:09 INFO scheduler.DAGScheduler: ResultStage 125 (foreachPartition at PredictorEngineApp.java:153) finished in 9.534 s 18/04/17 16:37:09 INFO scheduler.DAGScheduler: Job 125 finished: foreachPartition at PredictorEngineApp.java:153, took 9.650108 s 18/04/17 16:37:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6770e478 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6770e4780x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7929fac connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7929fac0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35632, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59271, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b18, negotiated timeout = 60000 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9218, negotiated timeout = 60000 18/04/17 16:37:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b18 18/04/17 16:37:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b18 closed 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9218 18/04/17 16:37:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9218 closed 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.18 from job set of time 1523972220000 ms 18/04/17 16:37:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.15 from job set of time 1523972220000 ms 18/04/17 16:37:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 123.0 (TID 123) in 9633 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:37:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 123.0, whose tasks have all completed, from pool 18/04/17 16:37:09 INFO scheduler.DAGScheduler: ResultStage 123 (foreachPartition at PredictorEngineApp.java:153) finished in 9.634 s 18/04/17 16:37:09 INFO scheduler.DAGScheduler: Job 124 finished: foreachPartition at PredictorEngineApp.java:153, took 9.744431 s 18/04/17 16:37:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4bf52e1b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4bf52e1b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59276, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9219, negotiated timeout = 60000 18/04/17 16:37:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9219 18/04/17 16:37:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9219 closed 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.19 from job set of time 1523972220000 ms 18/04/17 16:37:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 106.0 (TID 106) in 9841 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:37:09 INFO scheduler.DAGScheduler: ResultStage 106 (foreachPartition at PredictorEngineApp.java:153) finished in 9.842 s 18/04/17 16:37:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 106.0, whose tasks have all completed, from pool 18/04/17 16:37:09 INFO scheduler.DAGScheduler: Job 105 finished: foreachPartition at PredictorEngineApp.java:153, took 9.912869 s 18/04/17 16:37:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2438a5ed connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2438a5ed0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52897, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91f1, negotiated timeout = 60000 18/04/17 16:37:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 113.0 (TID 113) in 9853 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:37:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 113.0, whose tasks have all completed, from pool 18/04/17 16:37:10 INFO scheduler.DAGScheduler: ResultStage 113 (foreachPartition at PredictorEngineApp.java:153) finished in 9.855 s 18/04/17 16:37:10 INFO scheduler.DAGScheduler: Job 113 finished: foreachPartition at PredictorEngineApp.java:153, took 9.930074 s 18/04/17 16:37:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5d77f063 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5d77f0630x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52898, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91f2, negotiated timeout = 60000 18/04/17 16:37:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91f2 18/04/17 16:37:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91f1 18/04/17 16:37:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91f2 closed 18/04/17 16:37:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91f1 closed 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 130.0 (TID 130) in 9813 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:37:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 130.0, whose tasks have all completed, from pool 18/04/17 16:37:10 INFO scheduler.DAGScheduler: ResultStage 130 (foreachPartition at PredictorEngineApp.java:153) finished in 9.815 s 18/04/17 16:37:10 INFO scheduler.DAGScheduler: Job 130 finished: foreachPartition at PredictorEngineApp.java:153, took 9.952502 s 18/04/17 16:37:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2dbec434 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2dbec4340x0, 
quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59285, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.32 from job set of time 1523972220000 ms 18/04/17 16:37:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.29 from job set of time 1523972220000 ms 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c921c, negotiated timeout = 60000 18/04/17 16:37:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c921c 18/04/17 16:37:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c921c closed 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.28 from job set of time 1523972220000 ms 18/04/17 16:37:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 112.0 (TID 112) in 10493 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:37:10 INFO scheduler.DAGScheduler: ResultStage 112 (foreachPartition at PredictorEngineApp.java:153) finished in 10.493 s 18/04/17 16:37:10 INFO scheduler.DAGScheduler: Job 112 finished: foreachPartition at PredictorEngineApp.java:153, took 10.564783 s 18/04/17 16:37:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 112.0, whose tasks have all completed, from pool 18/04/17 16:37:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d5b903e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7d5b903e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52909, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91f4, negotiated timeout = 60000 18/04/17 16:37:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91f4 18/04/17 16:37:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91f4 closed 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.23 from job set of time 1523972220000 ms 18/04/17 16:37:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 117.0 (TID 117) in 10580 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:37:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 117.0, whose tasks have all completed, from pool 18/04/17 16:37:10 INFO scheduler.DAGScheduler: ResultStage 117 (foreachPartition at PredictorEngineApp.java:153) finished in 10.581 s 18/04/17 16:37:10 INFO scheduler.DAGScheduler: Job 117 finished: foreachPartition at PredictorEngineApp.java:153, took 10.672303 s 18/04/17 16:37:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xbc11b2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xbc11b20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52912, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91f5, negotiated timeout = 60000 18/04/17 16:37:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91f5 18/04/17 16:37:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91f5 closed 18/04/17 16:37:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.2 from job set of time 1523972220000 ms 18/04/17 16:37:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 120.0 (TID 120) in 10930 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:37:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 120.0, whose tasks have all completed, from pool 18/04/17 16:37:11 INFO scheduler.DAGScheduler: ResultStage 120 (foreachPartition at PredictorEngineApp.java:153) finished in 10.931 s 18/04/17 16:37:11 INFO scheduler.DAGScheduler: Job 120 finished: foreachPartition at PredictorEngineApp.java:153, took 11.030804 s 18/04/17 16:37:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3e31411d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3e31411d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59297, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c921e, negotiated timeout = 60000 18/04/17 16:37:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c921e 18/04/17 16:37:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c921e closed 18/04/17 16:37:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.22 from job set of time 1523972220000 ms 18/04/17 16:37:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 116.0 (TID 116) in 20245 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:37:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 116.0, whose tasks have all completed, from pool 18/04/17 16:37:20 INFO scheduler.DAGScheduler: ResultStage 116 (foreachPartition at PredictorEngineApp.java:153) finished in 20.246 s 18/04/17 16:37:20 INFO scheduler.DAGScheduler: Job 116 finished: foreachPartition at PredictorEngineApp.java:153, took 20.333506 s 18/04/17 16:37:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6bf5171c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6bf5171c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59316, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9221, negotiated timeout = 60000 18/04/17 16:37:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9221 18/04/17 16:37:20 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9221 closed 18/04/17 16:37:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:20 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.1 from job set of time 1523972220000 ms 18/04/17 16:37:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 128.0 (TID 128) in 21709 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:37:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 128.0, whose tasks have all completed, from pool 18/04/17 16:37:21 INFO scheduler.DAGScheduler: ResultStage 128 (foreachPartition at PredictorEngineApp.java:153) finished in 21.711 s 18/04/17 16:37:21 INFO scheduler.DAGScheduler: Job 128 finished: foreachPartition at PredictorEngineApp.java:153, took 21.844112 s 18/04/17 16:37:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5ae575ec connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:37:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5ae575ec0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:37:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:37:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:52939, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:37:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a91ff, negotiated timeout = 60000 18/04/17 16:37:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a91ff 18/04/17 16:37:21 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a91ff closed 18/04/17 16:37:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:37:21 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.10 from job set of time 1523972220000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Added jobs for time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.0 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.1 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.2 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.0 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.3 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.6 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.5 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.4 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.7 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.3 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.10 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.8 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.4 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.9 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.12 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.13 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.11 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.14 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.13 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.17 from job set of time 1523972280000 ms 18/04/17 16:38:00 
INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.15 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.14 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.17 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.16 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.19 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.21 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.18 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.21 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.23 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.20 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.16 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.24 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.22 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.25 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.26 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.27 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.29 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.28 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.30 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.31 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.33 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.32 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.30 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.35 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972280000 ms.34 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.35 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO 
spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 131 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 131 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 131 (KafkaRDD[203] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_131 stored as values in memory (estimated size 5.7 KB, free 490.9 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_131_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.9 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_131_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 131 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 131 (KafkaRDD[203] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 131.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 132 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 132 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 132 (KafkaRDD[209] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 131.0 (TID 131, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_132 stored as values in memory (estimated size 5.7 KB, free 490.9 MB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_132_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.9 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_132_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 132 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 132 (KafkaRDD[209] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 132.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 133 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 133 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 133 (KafkaRDD[202] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 132.0 (TID 132, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_133 stored as values in memory (estimated size 5.7 KB, free 490.9 MB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_133_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.9 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_133_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 133 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 133 (KafkaRDD[202] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 133.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 134 (foreachPartition at PredictorEngineApp.java:153) with 1 output 
partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 134 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 134 (KafkaRDD[204] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 133.0 (TID 133, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_134 stored as values in memory (estimated size 5.7 KB, free 490.9 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_131_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_134_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.9 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_134_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 134 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 134 (KafkaRDD[204] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 134.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 135 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 135 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 135 (KafkaRDD[182] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 134.0 (TID 134, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_135 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_132_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_135_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_135_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 135 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 135 (KafkaRDD[182] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 135.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 136 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 136 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 
INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 136 (KafkaRDD[190] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_133_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 135.0 (TID 135, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_136 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_136_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_134_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_136_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 136 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 136 (KafkaRDD[190] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 136.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 137 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 137 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 137 (KafkaRDD[192] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 136.0 (TID 136, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_137 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_137_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_137_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 137 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 137 (KafkaRDD[192] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 137.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 138 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 138 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 138 (KafkaRDD[211] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 
16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 137.0 (TID 137, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_138 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_135_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO spark.ContextCleaner: Cleaned accumulator 129 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_138_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_138_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 138 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 138 (KafkaRDD[211] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 138.0 with 1 tasks 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Removed broadcast_127_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 140 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 139 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 139 (KafkaRDD[200] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_139 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Removed broadcast_127_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 138.0 (TID 138, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_137_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_136_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_139_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_139_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 139 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 139 (KafkaRDD[200] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 139.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 139 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 140 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of 
final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 140 (KafkaRDD[198] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 139.0 (TID 139, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_140 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_138_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Removed broadcast_128_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Removed broadcast_128_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_140_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_140_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.ContextCleaner: Cleaned accumulator 131 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 140 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 140 (KafkaRDD[198] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 140.0 with 1 tasks 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Removed broadcast_130_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 141 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 141 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 141 (KafkaRDD[189] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 140.0 (TID 140, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_141 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Removed broadcast_130_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_141_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_141_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 141 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 141 (KafkaRDD[189] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 141.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: 
Got job 142 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 142 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 142 (KafkaRDD[205] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 141.0 (TID 141, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_142 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_139_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_142_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_142_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 142 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 142 (KafkaRDD[205] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 142.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 143 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 143 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 143 (KafkaRDD[214] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_140_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_143 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 142.0 (TID 142, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_141_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_143_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_143_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 143 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 143 (KafkaRDD[214] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 143.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 144 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 
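
The entries above show the shape of the application driving this log: every 60-second batch fans out into jobs named "streaming job <batch time> ms.0" through "ms.35", and each job is a single ResultStage over a KafkaRDD built by createDirectStream at PredictorEngineApp.java:125 and consumed by foreachPartition at PredictorEngineApp.java:153. The interleaved "Starting job"/"Finished job" entries (several jobs start before earlier ones finish) also imply spark.streaming.concurrentJobs was raised above its default of 1. Below is a minimal sketch of that driver wiring, for orientation only; it is not the application's actual source. The Spark 1.6-era Java direct-stream API calls are real, but the class name, broker list, topic names, concurrency value and batch interval are assumptions inferred from the log.

    import java.util.*;
    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.*;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    public final class PredictorEngineSketch {
        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf()
                    .setAppName("PredictorEngineSketch")
                    // >1 is implied by the interleaved job starts above; the exact value is an assumption
                    .set("spark.streaming.concurrentJobs", "8");
            // 60 s batch interval assumed from the one-minute batch times in the log
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

            Map<String, String> kafkaParams = new HashMap<>();
            kafkaParams.put("metadata.broker.list", "broker-1:9092,broker-2:9092"); // assumed brokers

            // One direct stream per topic -> one independent output job per topic per batch,
            // which is why the scheduler logs dozens of "streaming job ... ms.N" entries.
            for (String topic : Arrays.asList("topic-0", "topic-1" /* ..., assumed topic list */)) {
                JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                        kafkaParams, Collections.singleton(topic));   // cf. createDirectStream at line 125

                stream.foreachRDD(rdd ->
                    rdd.foreachPartition(records -> {                 // cf. foreachPartition at line 153
                        while (records.hasNext()) {
                            Tuple2<String, String> record = records.next();
                            // score the record and persist the result (see the HBase sketch further below)
                        }
                    }));
            }

            jssc.start();
            jssc.awaitTermination();
        }
    }

Since every TaskSet in the log carries exactly one task, each KafkaRDD here appears to cover a single Kafka partition, so each stage is one RACK_LOCAL or NODE_LOCAL task placed next to its broker where possible.
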
18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 144 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 144 (KafkaRDD[188] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 143.0 (TID 143, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_144 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_144_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_144_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 144 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 144 (KafkaRDD[188] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 144.0 with 1 tasks 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_142_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 145 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 145 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 145 (KafkaRDD[195] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 144.0 (TID 144, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_145 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_143_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_145_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_144_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_145_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 145 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 145 (KafkaRDD[195] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 145.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 146 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 146 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 146 (KafkaRDD[181] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_146 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 145.0 (TID 145, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_146_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_146_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 146 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 146 (KafkaRDD[181] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 146.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 147 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 147 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 147 (KafkaRDD[187] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_147 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 146.0 (TID 146, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 139.0 (TID 139) in 61 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 139.0, whose tasks have all completed, from pool 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_147_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_147_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 147 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 147 (KafkaRDD[187] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 147.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 149 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 148 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 
ResultStage 148 (KafkaRDD[212] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 147.0 (TID 147, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_148 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_145_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_146_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_148_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_148_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 148 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 148 (KafkaRDD[212] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 148.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 148 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 149 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 149 (KafkaRDD[213] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 148.0 (TID 148, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_149 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_149_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_149_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 149 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 149 (KafkaRDD[213] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 149.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 150 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 150 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 150 (KafkaRDD[185] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 149.0 (TID 149, ***hostname masked***, 
executor 12, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_150 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_147_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_150_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_150_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 150 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 150 (KafkaRDD[185] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 150.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 152 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 151 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 151 (KafkaRDD[191] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 150.0 (TID 150, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_151 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_151_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_151_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 151 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 151 (KafkaRDD[191] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 151.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 151 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 152 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 152 (KafkaRDD[206] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_152 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 151.0 (TID 151, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_152_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added 
broadcast_152_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 152 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 152 (KafkaRDD[206] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 152.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 153 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 153 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 153 (KafkaRDD[199] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_150_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_153 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_149_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 152.0 (TID 152, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_148_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_153_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_153_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 153 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 153 (KafkaRDD[199] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 153.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 154 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 154 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 154 (KafkaRDD[207] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_154 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 153.0 (TID 153, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_154_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_154_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 
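
The hconnection-0x.../ZooKeeper entries that follow each completed job show an HBase connection being created and torn down once per streaming job: RecoverableZooKeeper opens a session against the /hbase quorum, and ConnectionManager$HConnectionImplementation closes it moments later. Below is a hedged sketch of the kind of write path that produces this churn; the table name, column family, row key and value are assumptions, and the log alone does not show whether the connection is opened in the driver code or inside the partition handler. Only the HBase 1.x client calls themselves (ConnectionFactory.createConnection, getTable, Put) are standard API.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class HBaseWriteSketch {
        // Called once per streaming job; every call opens and later closes a ZooKeeper session,
        // which is exactly the connect/close pattern repeated throughout this log.
        static void writePrediction(String rowKey, double score) throws Exception {
            Configuration hbaseConf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
            try (Connection connection = ConnectionFactory.createConnection(hbaseConf); // new ZK session
                 Table table = connection.getTable(TableName.valueOf("predictions"))) { // assumed table
                Put put = new Put(Bytes.toBytes(rowKey));                                // assumed row key
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("score"), Bytes.toBytes(score)); // assumed schema
                table.put(put);
            } // close() tears the session down: "Closing zookeeper sessionid=..." / "Session: ... closed"
        }
    }

Reusing one long-lived Connection per JVM instead of one per job would avoid renegotiating a fresh 60000 ms-timeout ZooKeeper session every batch, which is what generates the repeated session open/close entries here; the churn is otherwise harmless.
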
18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 154 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 154 (KafkaRDD[207] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 154.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 155 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 155 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 155 (KafkaRDD[208] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_152_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_155 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 154.0 (TID 154, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_151_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_155_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_155_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 155 from broadcast at DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 155 (KafkaRDD[208] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 155.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Got job 156 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 156 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting ResultStage 156 (KafkaRDD[186] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_156 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 155.0 (TID 155, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_153_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.MemoryStore: Block broadcast_156_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_156_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:00 INFO spark.SparkContext: Created broadcast 156 from broadcast at 
DAGScheduler.scala:1006 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 156 (KafkaRDD[186] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Adding task set 156.0 with 1 tasks 18/04/17 16:38:00 INFO scheduler.DAGScheduler: ResultStage 139 (foreachPartition at PredictorEngineApp.java:153) finished in 0.093 s 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Job 140 finished: foreachPartition at PredictorEngineApp.java:153, took 0.164190 s 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 156.0 (TID 156, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:38:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6b2e361f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6b2e361f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59462, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_154_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_155_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c922b, negotiated timeout = 60000 18/04/17 16:38:00 INFO storage.BlockManagerInfo: Added broadcast_156_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c922b 18/04/17 16:38:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c922b closed 18/04/17 16:38:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.20 from job set of time 1523972280000 ms 18/04/17 16:38:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 144.0 (TID 144) in 147 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:38:00 INFO scheduler.DAGScheduler: ResultStage 144 (foreachPartition at PredictorEngineApp.java:153) finished in 0.147 s 18/04/17 16:38:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 144.0, whose tasks have all completed, from pool 18/04/17 16:38:00 INFO scheduler.DAGScheduler: Job 144 finished: foreachPartition at PredictorEngineApp.java:153, took 0.244729 s 18/04/17 16:38:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5b7873be connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 
sessionTimeout=60000 watcher=hconnection-0x5b7873be0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53083, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9206, negotiated timeout = 60000 18/04/17 16:38:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9206 18/04/17 16:38:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9206 closed 18/04/17 16:38:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.8 from job set of time 1523972280000 ms 18/04/17 16:38:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 142.0 (TID 142) in 1705 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:38:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 142.0, whose tasks have all completed, from pool 18/04/17 16:38:01 INFO scheduler.DAGScheduler: ResultStage 142 (foreachPartition at PredictorEngineApp.java:153) finished in 1.706 s 18/04/17 16:38:01 INFO scheduler.DAGScheduler: Job 142 finished: foreachPartition at PredictorEngineApp.java:153, took 1.793532 s 18/04/17 16:38:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x55bba691 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x55bba6910x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35831, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b2c, negotiated timeout = 60000 18/04/17 16:38:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b2c 18/04/17 16:38:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b2c closed 18/04/17 16:38:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:01 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.25 from job set of time 1523972280000 ms 18/04/17 16:38:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 147.0 (TID 147) in 2465 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:38:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 147.0, whose tasks have all completed, from pool 18/04/17 16:38:02 INFO scheduler.DAGScheduler: ResultStage 147 (foreachPartition at PredictorEngineApp.java:153) finished in 2.466 s 18/04/17 16:38:02 INFO scheduler.DAGScheduler: Job 147 finished: foreachPartition at PredictorEngineApp.java:153, took 2.599188 s 18/04/17 16:38:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x64b03079 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x64b030790x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59475, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9230, negotiated timeout = 60000 18/04/17 16:38:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9230 18/04/17 16:38:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9230 closed 18/04/17 16:38:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.7 from job set of time 1523972280000 ms 18/04/17 16:38:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 137.0 (TID 137) in 2622 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:38:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 137.0, whose tasks have all completed, from pool 18/04/17 16:38:02 INFO scheduler.DAGScheduler: ResultStage 137 (foreachPartition at PredictorEngineApp.java:153) finished in 2.623 s 18/04/17 16:38:02 INFO scheduler.DAGScheduler: Job 137 finished: foreachPartition at PredictorEngineApp.java:153, took 2.667873 s 18/04/17 16:38:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63ee3cc8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x63ee3cc80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59478, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9231, negotiated timeout = 60000 18/04/17 16:38:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9231 18/04/17 16:38:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9231 closed 18/04/17 16:38:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.12 from job set of time 1523972280000 ms 18/04/17 16:38:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 155.0 (TID 155) in 3468 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:38:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 155.0, whose tasks have all completed, from pool 18/04/17 16:38:03 INFO scheduler.DAGScheduler: ResultStage 155 (foreachPartition at PredictorEngineApp.java:153) finished in 3.469 s 18/04/17 16:38:03 INFO scheduler.DAGScheduler: Job 155 finished: foreachPartition at PredictorEngineApp.java:153, took 3.622851 s 18/04/17 16:38:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xc517b7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xc517b70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59482, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9232, negotiated timeout = 60000 18/04/17 16:38:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9232 18/04/17 16:38:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9232 closed 18/04/17 16:38:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 156.0 (TID 156) in 3494 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:38:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 156.0, whose tasks have all completed, from pool 18/04/17 16:38:03 INFO scheduler.DAGScheduler: ResultStage 156 (foreachPartition at PredictorEngineApp.java:153) finished in 3.495 s 18/04/17 16:38:03 INFO scheduler.DAGScheduler: Job 156 finished: foreachPartition at PredictorEngineApp.java:153, took 3.651909 s 18/04/17 16:38:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1764cd58 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1764cd580x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59485, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.28 from job set of time 1523972280000 ms 18/04/17 16:38:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9233, negotiated timeout = 60000 18/04/17 16:38:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9233 18/04/17 16:38:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9233 closed 18/04/17 16:38:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.6 from job set of time 1523972280000 ms 18/04/17 16:38:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 149.0 (TID 149) in 3812 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:38:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 149.0, whose tasks have all completed, from pool 18/04/17 16:38:04 INFO scheduler.DAGScheduler: ResultStage 149 (foreachPartition at PredictorEngineApp.java:153) finished in 3.812 s 18/04/17 16:38:04 INFO scheduler.DAGScheduler: Job 148 finished: foreachPartition at PredictorEngineApp.java:153, took 3.952666 s 18/04/17 16:38:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x77c66a51 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x77c66a510x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59489, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9235, negotiated timeout = 60000 18/04/17 16:38:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9235 18/04/17 16:38:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9235 closed 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.33 from job set of time 1523972280000 ms 18/04/17 16:38:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 153.0 (TID 153) in 4014 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:38:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 153.0, whose tasks have all completed, from pool 18/04/17 16:38:04 INFO scheduler.DAGScheduler: ResultStage 153 (foreachPartition at PredictorEngineApp.java:153) finished in 4.015 s 18/04/17 16:38:04 INFO scheduler.DAGScheduler: Job 153 finished: foreachPartition at PredictorEngineApp.java:153, took 4.163099 s 18/04/17 16:38:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x45fb69d2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x45fb69d20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59492, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9236, negotiated timeout = 60000 18/04/17 16:38:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9236 18/04/17 16:38:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9236 closed 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.19 from job set of time 1523972280000 ms 18/04/17 16:38:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 138.0 (TID 138) in 4820 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:38:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 138.0, whose tasks have all completed, from pool 18/04/17 16:38:04 INFO scheduler.DAGScheduler: ResultStage 138 (foreachPartition at PredictorEngineApp.java:153) finished in 4.824 s 18/04/17 16:38:04 INFO scheduler.DAGScheduler: Job 138 finished: foreachPartition at PredictorEngineApp.java:153, took 4.888852 s 18/04/17 16:38:04 INFO spark.ContextCleaner: Cleaned accumulator 154 18/04/17 16:38:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x23a28536 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x23a285360x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:04 INFO storage.BlockManagerInfo: Removed broadcast_138_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35858, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:04 INFO storage.BlockManagerInfo: Removed broadcast_138_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:04 INFO spark.ContextCleaner: Cleaned accumulator 140 18/04/17 16:38:04 INFO storage.BlockManagerInfo: Removed broadcast_139_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:04 INFO storage.BlockManagerInfo: Removed broadcast_139_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b2e, negotiated timeout = 60000 18/04/17 16:38:04 INFO spark.ContextCleaner: Cleaned accumulator 143 18/04/17 16:38:04 INFO storage.BlockManagerInfo: Removed broadcast_142_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:04 INFO storage.BlockManagerInfo: Removed broadcast_142_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:04 INFO spark.ContextCleaner: Cleaned accumulator 145 18/04/17 16:38:04 INFO storage.BlockManagerInfo: Removed broadcast_144_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:04 INFO storage.BlockManagerInfo: Removed broadcast_144_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b2e 18/04/17 16:38:04 INFO spark.ContextCleaner: Cleaned accumulator 148 18/04/17 16:38:04 INFO storage.BlockManagerInfo: Removed broadcast_147_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:04 INFO storage.BlockManagerInfo: Removed broadcast_147_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b2e closed 18/04/17 16:38:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:04 INFO spark.ContextCleaner: Cleaned accumulator 150 18/04/17 16:38:05 INFO storage.BlockManagerInfo: Removed broadcast_149_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:05 INFO storage.BlockManagerInfo: Removed broadcast_149_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:05 INFO storage.BlockManagerInfo: Removed broadcast_153_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:05 INFO storage.BlockManagerInfo: Removed broadcast_153_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:05 INFO storage.BlockManagerInfo: Removed broadcast_156_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:38:05 INFO storage.BlockManagerInfo: Removed broadcast_156_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:05 INFO spark.ContextCleaner: Cleaned accumulator 157 18/04/17 16:38:05 INFO spark.ContextCleaner: Cleaned accumulator 156 18/04/17 16:38:05 INFO 
scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.31 from job set of time 1523972280000 ms 18/04/17 16:38:05 INFO storage.BlockManagerInfo: Removed broadcast_155_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 16:38:05 INFO storage.BlockManagerInfo: Removed broadcast_155_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:38:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 154.0 (TID 154) in 6789 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:38:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 154.0, whose tasks have all completed, from pool 18/04/17 16:38:07 INFO scheduler.DAGScheduler: ResultStage 154 (foreachPartition at PredictorEngineApp.java:153) finished in 6.790 s 18/04/17 16:38:07 INFO scheduler.DAGScheduler: Job 154 finished: foreachPartition at PredictorEngineApp.java:153, took 6.941550 s 18/04/17 16:38:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x8c6467b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x8c6467b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59502, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9238, negotiated timeout = 60000 18/04/17 16:38:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9238 18/04/17 16:38:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9238 closed 18/04/17 16:38:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.27 from job set of time 1523972280000 ms 18/04/17 16:38:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 148.0 (TID 148) in 7129 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:38:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 148.0, whose tasks have all completed, from pool 18/04/17 16:38:07 INFO scheduler.DAGScheduler: ResultStage 148 (foreachPartition at PredictorEngineApp.java:153) finished in 7.130 s 18/04/17 16:38:07 INFO scheduler.DAGScheduler: Job 149 finished: foreachPartition at PredictorEngineApp.java:153, took 7.266656 s 18/04/17 16:38:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63d647eb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x63d647eb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:07 INFO zookeeper.ClientCnxn: Opening socket 
connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59505, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9239, negotiated timeout = 60000 18/04/17 16:38:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9239 18/04/17 16:38:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9239 closed 18/04/17 16:38:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.32 from job set of time 1523972280000 ms 18/04/17 16:38:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 143.0 (TID 143) in 7828 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:38:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 143.0, whose tasks have all completed, from pool 18/04/17 16:38:08 INFO scheduler.DAGScheduler: ResultStage 143 (foreachPartition at PredictorEngineApp.java:153) finished in 7.829 s 18/04/17 16:38:08 INFO scheduler.DAGScheduler: Job 143 finished: foreachPartition at PredictorEngineApp.java:153, took 7.921321 s 18/04/17 16:38:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x76be6ef2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x76be6ef20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53127, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a920c, negotiated timeout = 60000 18/04/17 16:38:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a920c 18/04/17 16:38:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a920c closed 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.34 from job set of time 1523972280000 ms 18/04/17 16:38:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 140.0 (TID 140) in 8218 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:38:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 140.0, whose tasks have all completed, from pool 18/04/17 16:38:08 INFO scheduler.DAGScheduler: ResultStage 140 (foreachPartition at PredictorEngineApp.java:153) finished in 8.220 s 18/04/17 16:38:08 INFO scheduler.DAGScheduler: Job 139 finished: foreachPartition at PredictorEngineApp.java:153, took 8.296663 s 18/04/17 16:38:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72f7b3e5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x72f7b3e50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35875, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b2f, negotiated timeout = 60000 18/04/17 16:38:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b2f 18/04/17 16:38:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b2f closed 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.18 from job set of time 1523972280000 ms 18/04/17 16:38:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 132.0 (TID 132) in 8397 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:38:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 132.0, whose tasks have all completed, from pool 18/04/17 16:38:08 INFO scheduler.DAGScheduler: ResultStage 132 (foreachPartition at PredictorEngineApp.java:153) finished in 8.397 s 18/04/17 16:38:08 INFO scheduler.DAGScheduler: Job 132 finished: foreachPartition at PredictorEngineApp.java:153, took 8.417499 s 18/04/17 16:38:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d350d0d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d350d0d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35878, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b31, negotiated timeout = 60000 18/04/17 16:38:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b31 18/04/17 16:38:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b31 closed 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.29 from job set of time 1523972280000 ms 18/04/17 16:38:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 145.0 (TID 145) in 8353 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:38:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 145.0, whose tasks have all completed, from pool 18/04/17 16:38:08 INFO scheduler.DAGScheduler: ResultStage 145 (foreachPartition at PredictorEngineApp.java:153) finished in 8.354 s 18/04/17 16:38:08 INFO scheduler.DAGScheduler: Job 145 finished: foreachPartition at PredictorEngineApp.java:153, took 8.477045 s 18/04/17 16:38:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x47cb8d3b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x47cb8d3b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35881, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b32, negotiated timeout = 60000 18/04/17 16:38:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b32 18/04/17 16:38:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b32 closed 18/04/17 16:38:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.15 from job set of time 1523972280000 ms 18/04/17 16:38:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 135.0 (TID 135) in 8956 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:38:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 135.0, whose tasks have all completed, from pool 18/04/17 16:38:09 INFO scheduler.DAGScheduler: ResultStage 135 (foreachPartition at PredictorEngineApp.java:153) finished in 8.956 s 18/04/17 16:38:09 INFO scheduler.DAGScheduler: Job 135 finished: foreachPartition at PredictorEngineApp.java:153, took 8.991043 s 18/04/17 16:38:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x98ddf82 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x98ddf820x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53141, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a920d, negotiated timeout = 60000 18/04/17 16:38:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a920d 18/04/17 16:38:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a920d closed 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.2 from job set of time 1523972280000 ms 18/04/17 16:38:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 134.0 (TID 134) in 9529 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:38:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 134.0, whose tasks have all completed, from pool 18/04/17 16:38:09 INFO scheduler.DAGScheduler: ResultStage 134 (foreachPartition at PredictorEngineApp.java:153) finished in 9.529 s 18/04/17 16:38:09 INFO scheduler.DAGScheduler: Job 134 finished: foreachPartition at PredictorEngineApp.java:153, took 9.559546 s 18/04/17 16:38:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3a1f5863 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3a1f58630x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35888, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b33, negotiated timeout = 60000 18/04/17 16:38:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b33 18/04/17 16:38:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b33 closed 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.24 from job set of time 1523972280000 ms 18/04/17 16:38:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 131.0 (TID 131) in 9769 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:38:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 131.0, whose tasks have all completed, from pool 18/04/17 16:38:09 INFO scheduler.DAGScheduler: ResultStage 131 (foreachPartition at PredictorEngineApp.java:153) finished in 9.770 s 18/04/17 16:38:09 INFO scheduler.DAGScheduler: Job 131 finished: foreachPartition at PredictorEngineApp.java:153, took 9.785545 s 18/04/17 16:38:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d9b8be4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7d9b8be40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53147, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9211, negotiated timeout = 60000 18/04/17 16:38:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9211 18/04/17 16:38:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9211 closed 18/04/17 16:38:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.23 from job set of time 1523972280000 ms 18/04/17 16:38:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 109.0 (TID 109) in 71056 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:38:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 109.0, whose tasks have all completed, from pool 18/04/17 16:38:11 INFO scheduler.DAGScheduler: ResultStage 109 (foreachPartition at PredictorEngineApp.java:153) finished in 71.058 s 18/04/17 16:38:11 INFO scheduler.DAGScheduler: Job 109 finished: foreachPartition at PredictorEngineApp.java:153, took 71.113246 s 18/04/17 16:38:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xef93fd8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xef93fd80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53152, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9213, negotiated timeout = 60000 18/04/17 16:38:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9213 18/04/17 16:38:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9213 closed 18/04/17 16:38:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.11 from job set of time 1523972220000 ms 18/04/17 16:38:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 151.0 (TID 151) in 11182 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:38:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 151.0, whose tasks have all completed, from pool 18/04/17 16:38:11 INFO scheduler.DAGScheduler: ResultStage 151 (foreachPartition at PredictorEngineApp.java:153) finished in 11.183 s 18/04/17 16:38:11 INFO scheduler.DAGScheduler: Job 152 finished: foreachPartition at PredictorEngineApp.java:153, took 11.329908 s 18/04/17 16:38:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xdaf0bc1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xdaf0bc10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53155, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9214, negotiated timeout = 60000 18/04/17 16:38:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9214 18/04/17 16:38:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9214 closed 18/04/17 16:38:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.11 from job set of time 1523972280000 ms 18/04/17 16:38:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 141.0 (TID 141) in 12207 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:38:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 141.0, whose tasks have all completed, from pool 18/04/17 16:38:12 INFO scheduler.DAGScheduler: ResultStage 141 (foreachPartition at PredictorEngineApp.java:153) finished in 12.207 s 18/04/17 16:38:12 INFO scheduler.DAGScheduler: Job 141 finished: foreachPartition at PredictorEngineApp.java:153, took 12.289485 s 18/04/17 16:38:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x151bd5de connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x151bd5de0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59541, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9241, negotiated timeout = 60000 18/04/17 16:38:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9241 18/04/17 16:38:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9241 closed 18/04/17 16:38:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:12 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.9 from job set of time 1523972280000 ms 18/04/17 16:38:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 150.0 (TID 150) in 13758 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:38:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 150.0, whose tasks have all completed, from pool 18/04/17 16:38:13 INFO scheduler.DAGScheduler: ResultStage 150 (foreachPartition at PredictorEngineApp.java:153) finished in 13.759 s 18/04/17 16:38:13 INFO scheduler.DAGScheduler: Job 150 finished: foreachPartition at PredictorEngineApp.java:153, took 13.902927 s 18/04/17 16:38:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1a0611c4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1a0611c40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59547, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9243, negotiated timeout = 60000 18/04/17 16:38:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9243 18/04/17 16:38:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9243 closed 18/04/17 16:38:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.5 from job set of time 1523972280000 ms 18/04/17 16:38:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 133.0 (TID 133) in 14442 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:38:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 133.0, whose tasks have all completed, from pool 18/04/17 16:38:14 INFO scheduler.DAGScheduler: ResultStage 133 (foreachPartition at PredictorEngineApp.java:153) finished in 14.442 s 18/04/17 16:38:14 INFO scheduler.DAGScheduler: Job 133 finished: foreachPartition at PredictorEngineApp.java:153, took 14.467739 s 18/04/17 16:38:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2e79c297 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2e79c2970x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35913, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b37, negotiated timeout = 60000 18/04/17 16:38:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b37 18/04/17 16:38:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b37 closed 18/04/17 16:38:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.22 from job set of time 1523972280000 ms 18/04/17 16:38:24 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 146.0 (TID 146) in 24488 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:38:24 INFO cluster.YarnClusterScheduler: Removed TaskSet 146.0, whose tasks have all completed, from pool 18/04/17 16:38:24 INFO scheduler.DAGScheduler: ResultStage 146 (foreachPartition at PredictorEngineApp.java:153) finished in 24.489 s 18/04/17 16:38:24 INFO scheduler.DAGScheduler: Job 146 finished: foreachPartition at PredictorEngineApp.java:153, took 24.617300 s 18/04/17 16:38:24 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x36d01e46 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x36d01e460x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:24 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:24 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35935, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:24 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b3f, negotiated timeout = 60000 18/04/17 16:38:24 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b3f 18/04/17 16:38:24 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b3f closed 18/04/17 16:38:24 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:24 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.1 from job set of time 1523972280000 ms 18/04/17 16:38:24 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 152.0 (TID 152) in 24524 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:38:24 INFO cluster.YarnClusterScheduler: Removed TaskSet 152.0, whose tasks have all completed, from pool 18/04/17 16:38:24 INFO scheduler.DAGScheduler: ResultStage 152 (foreachPartition at PredictorEngineApp.java:153) finished in 24.525 s 18/04/17 16:38:24 INFO scheduler.DAGScheduler: Job 151 finished: foreachPartition at PredictorEngineApp.java:153, took 24.675326 s 18/04/17 16:38:24 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7475296f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7475296f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:24 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:24 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35938, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:24 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b40, negotiated timeout = 60000 18/04/17 16:38:24 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b40 18/04/17 16:38:24 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b40 closed 18/04/17 16:38:24 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:24 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.26 from job set of time 1523972280000 ms 18/04/17 16:38:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 136.0 (TID 136) in 25687 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:38:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 136.0, whose tasks have all completed, from pool 18/04/17 16:38:25 INFO scheduler.DAGScheduler: ResultStage 136 (foreachPartition at PredictorEngineApp.java:153) finished in 25.688 s 18/04/17 16:38:25 INFO scheduler.DAGScheduler: Job 136 finished: foreachPartition at PredictorEngineApp.java:153, took 25.727414 s 18/04/17 16:38:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1cdf6c87 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:38:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1cdf6c870x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:38:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:38:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59580, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:38:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9248, negotiated timeout = 60000 18/04/17 16:38:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9248 18/04/17 16:38:25 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9248 closed 18/04/17 16:38:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:38:25 INFO scheduler.JobScheduler: Finished job streaming job 1523972280000 ms.10 from job set of time 1523972280000 ms 18/04/17 16:38:25 INFO scheduler.JobScheduler: Total delay: 25.837 s for time 1523972280000 ms (execution: 25.768 s) 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 108 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 108 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 144 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 144 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 108 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 108 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 144 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 144 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 109 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 109 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 145 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 145 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 109 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 109 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 145 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 145 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 110 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 110 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 146 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 146 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 110 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 110 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 146 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 146 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 111 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 111 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 147 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 147 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 111 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 111 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 147 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 147 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 112 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 112 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 148 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 148 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 112 from persistence list 18/04/17 
16:38:25 INFO storage.BlockManager: Removing RDD 112 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 148 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 148 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 113 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 113 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 149 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 149 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 113 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 113 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 149 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 149 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 114 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 114 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 150 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 150 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 114 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 114 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 150 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 150 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 115 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 115 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 151 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 151 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 115 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 115 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 151 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 151 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 116 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 116 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 152 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 152 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 116 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 116 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 152 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 152 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 117 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 117 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 153 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 153 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 117 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 117 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 153 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 153 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 118 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 118 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 154 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 154 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 118 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 118 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 154 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 154 
18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 119 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 119 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 155 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 155 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 119 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 119 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 155 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 155 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 120 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 120 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 156 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 156 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 120 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 120 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 156 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 156 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 121 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 121 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 157 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 157 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 121 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 121 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 157 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 157 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 122 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 122 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 158 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 158 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 122 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 122 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 158 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 158 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 123 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 123 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 159 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 159 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 123 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 123 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 159 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 159 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 124 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 124 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 160 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 160 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 124 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 124 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 160 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 160 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 125 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 125 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 
161 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 161 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 125 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 125 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 161 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 161 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 126 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 126 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 162 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 162 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 126 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 126 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 162 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 162 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 127 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 127 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 163 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 163 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 127 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 127 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 163 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 163 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 128 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 128 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 164 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 164 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 128 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 128 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 164 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 164 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 129 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 129 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 165 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 165 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 129 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 129 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 165 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 165 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 130 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 130 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 166 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 166 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 130 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 130 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 166 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 166 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 131 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 131 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 167 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 167 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 131 from persistence list 18/04/17 16:38:25 INFO 
storage.BlockManager: Removing RDD 131 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 167 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 167 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 132 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 132 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 168 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 168 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 132 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 132 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 168 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 168 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 133 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 133 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 169 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 169 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 133 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 133 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 169 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 169 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 134 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 134 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 170 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 170 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 134 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 134 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 170 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 170 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 135 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 135 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 171 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 171 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 135 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 135 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 171 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 171 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 136 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 136 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 172 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 172 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 136 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 136 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 172 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 172 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 137 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 137 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 173 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 173 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 137 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 137 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 173 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 173 18/04/17 
16:38:25 INFO kafka.KafkaRDD: Removing RDD 138 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 138 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 174 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 174 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 138 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 138 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 174 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 174 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 139 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 139 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 175 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 175 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 139 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 139 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 175 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 175 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 140 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 140 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 176 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 176 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 140 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 140 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 176 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 176 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 141 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 141 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 177 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 177 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 141 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 141 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 177 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 177 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 142 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 142 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 178 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 178 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 142 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 142 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 178 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 178 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 143 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 143 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 179 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 179 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 143 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 143 18/04/17 16:38:25 INFO kafka.KafkaRDD: Removing RDD 179 from persistence list 18/04/17 16:38:25 INFO storage.BlockManager: Removing RDD 179 18/04/17 16:38:25 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:38:25 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972160000 ms 1523972100000 ms 
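The scheduler records above keep pointing at two call sites in the driver program, createDirectStream at PredictorEngineApp.java:125 and foreachPartition at PredictorEngineApp.java:153, and each finished streaming job is bracketed by an HBase ZooKeeper connect/close pair (zookeeper.RecoverableZooKeeper ... Closing zookeeper sessionid). The application source is not part of this log, so the sketch below is only a plausible reconstruction of that shape against the Spark 1.6.0 / Kafka 0.8 direct-stream API and the HBase 1.x client API. The broker list, topic name, HBase table name, the 60-second batch interval (inferred from the 1523972280000 ms -> 1523972340000 ms job sets), and the per-partition HBase write are assumptions, not facts taken from the log.

// Hypothetical sketch only -- not the actual PredictorEngineApp source.
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import kafka.serializer.StringDecoder;
import scala.Tuple2;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public final class PredictorEngineSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine");
        // 60 s batches match the one-minute spacing of the job sets in this log.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092"); // assumption

        // One direct stream per input topic; the ~36 output operations per batch in the
        // log suggest many such streams. "topic-a" is a placeholder.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, Collections.singleton("topic-a"));   // ~ PredictorEngineApp.java:125

        stream.foreachRDD(rdd -> {
            rdd.foreachPartition((Iterator<Tuple2<String, String>> records) -> {  // ~ line 153
                // Open and close an HBase connection per partition -- this matches the
                // RecoverableZooKeeper connect/close pairs logged around each job.
                Configuration hbaseConf = HBaseConfiguration.create();
                try (Connection hbase = ConnectionFactory.createConnection(hbaseConf);
                     Table table = hbase.getTable(TableName.valueOf("predictions"))) { // table name is an assumption
                    while (records.hasNext()) {
                        Tuple2<String, String> record = records.next();
                        // score the record and table.put(...) the prediction (omitted in this sketch)
                    }
                }
            });
        });

        jssc.start();
        jssc.awaitTermination();
    }
}

Under that reading, the repeated "Removing RDD N from persistence list" pairs above are simply Spark Streaming unpersisting the previous batches' KafkaRDDs once their jobs finish, and the short ZooKeeper sessions are per-partition (or per-job) HBase connections being opened and torn down rather than reused.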
18/04/17 16:39:00 INFO scheduler.JobScheduler: Added jobs for time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.0 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.1 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.2 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.0 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.5 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.3 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.4 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.6 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.4 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.3 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.8 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.9 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.7 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.10 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.11 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.12 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.13 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.14 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.13 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.15 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.17 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.16 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.18 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.14 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.17 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.21 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: 
Starting job streaming job 1523972340000 ms.19 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.16 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.21 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.23 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.24 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.22 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.20 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.25 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.26 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.27 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.28 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.29 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.30 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.30 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.31 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.33 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.34 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.32 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972340000 ms.35 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 157 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 157 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 157 (KafkaRDD[245] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at 
PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_157 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_157_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.8 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_157_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 157 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 157 (KafkaRDD[245] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 157.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 158 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 158 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO 
scheduler.DAGScheduler: Submitting ResultStage 158 (KafkaRDD[244] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 157.0 (TID 157, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_158 stored as values in memory (estimated size 5.7 KB, free 490.8 MB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_158_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_158_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 158 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 158 (KafkaRDD[244] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 158.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 159 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 159 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 159 (KafkaRDD[235] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 158.0 (TID 158, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_159 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_159_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_159_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 159 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 159 (KafkaRDD[235] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 159.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 160 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 160 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 160 (KafkaRDD[217] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 159.0 (TID 159, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_160 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_160_piece0 stored as 
bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_160_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 160 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 160 (KafkaRDD[217] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 160.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 161 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 161 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 161 (KafkaRDD[231] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_157_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 160.0 (TID 160, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_161 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_161_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_161_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 161 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 161 (KafkaRDD[231] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 161.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 162 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 162 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 162 (KafkaRDD[221] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_162 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 161.0 (TID 161, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_162_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_162_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 162 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from 
ResultStage 162 (KafkaRDD[221] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 162.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 163 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 163 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 163 (KafkaRDD[238] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 162.0 (TID 162, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_163 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_159_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_160_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_163_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_163_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_158_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 163 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 163 (KafkaRDD[238] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 163.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 164 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 164 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 164 (KafkaRDD[228] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_164 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 163.0 (TID 163, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_164_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_164_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 164 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 164 (KafkaRDD[228] at createDirectStream at PredictorEngineApp.java:125) 
18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 164.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 165 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 165 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 165 (KafkaRDD[241] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_165 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 164.0 (TID 164, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_165_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_165_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 165 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_161_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 165 (KafkaRDD[241] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 165.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 166 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 166 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 166 (KafkaRDD[218] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 165.0 (TID 165, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_166 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_163_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_166_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_166_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 166 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 166 (KafkaRDD[218] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 166.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 167 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 
16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 167 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 167 (KafkaRDD[226] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_167 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_162_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 166.0 (TID 166, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_167_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_167_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 167 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 167 (KafkaRDD[226] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 167.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 168 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 168 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 168 (KafkaRDD[225] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_164_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 167.0 (TID 167, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_168 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_165_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_168_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_168_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 168 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 168 (KafkaRDD[225] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 168.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 169 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 169 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 169 (KafkaRDD[227] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_166_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 168.0 (TID 168, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_169 stored as values in memory (estimated size 5.7 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_167_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_169_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.7 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_169_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 169 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 169 (KafkaRDD[227] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 169.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 170 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 170 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 170 (KafkaRDD[222] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_170 stored as values in memory (estimated size 5.7 KB, free 490.6 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 169.0 (TID 169, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_168_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_170_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_170_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 170 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 170 (KafkaRDD[222] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 170.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 171 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 171 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of 
final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 171 (KafkaRDD[249] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_171 stored as values in memory (estimated size 5.7 KB, free 490.6 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 170.0 (TID 170, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_169_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_171_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_171_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 171 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 171 (KafkaRDD[249] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 171.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 172 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 172 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 172 (KafkaRDD[234] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_172 stored as values in memory (estimated size 5.7 KB, free 490.6 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 171.0 (TID 171, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_172_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_172_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 172 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 172 (KafkaRDD[234] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 172.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 173 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 173 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 173 (KafkaRDD[248] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_173 stored as values in memory (estimated size 5.7 KB, 
free 490.6 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 172.0 (TID 172, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_170_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_173_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_173_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 173 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 173 (KafkaRDD[248] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 173.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 174 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 174 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 174 (KafkaRDD[250] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_174 stored as values in memory (estimated size 5.7 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_171_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 173.0 (TID 173, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_174_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_174_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 174 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 174 (KafkaRDD[250] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 174.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 175 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 175 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 175 (KafkaRDD[251] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 174.0 (TID 174, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_175 stored as values in memory (estimated size 5.7 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block 
broadcast_175_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_175_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 175 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 175 (KafkaRDD[251] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 175.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 176 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 176 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 176 (KafkaRDD[242] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_176 stored as values in memory (estimated size 5.7 KB, free 490.6 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 175.0 (TID 175, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_176_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_176_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 176 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 176 (KafkaRDD[242] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 176.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 177 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 177 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 177 (KafkaRDD[223] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_172_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_177 stored as values in memory (estimated size 5.7 KB, free 490.6 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 176.0 (TID 176, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_173_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_174_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_177_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 
18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_177_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 177 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 177 (KafkaRDD[223] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 177.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 180 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 178 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 178 (KafkaRDD[224] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_178 stored as values in memory (estimated size 5.7 KB, free 490.6 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 177.0 (TID 177, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_178_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_178_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 178 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 178 (KafkaRDD[224] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 178.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 179 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 179 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 179 (KafkaRDD[239] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_179 stored as values in memory (estimated size 5.7 KB, free 490.6 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 178.0 (TID 178, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_179_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_179_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 179 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 179 (KafkaRDD[239] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 179.0 with 1 tasks 18/04/17 16:39:00 INFO 
scheduler.DAGScheduler: Got job 178 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 180 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 180 (KafkaRDD[236] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_180 stored as values in memory (estimated size 5.7 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_176_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_175_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 179.0 (TID 179, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_180_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_180_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 180 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 180 (KafkaRDD[236] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 180.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 181 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 181 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 181 (KafkaRDD[240] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_181 stored as values in memory (estimated size 5.7 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_178_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_177_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 180.0 (TID 180, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 161.0 (TID 161) in 77 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 161.0, whose tasks have all completed, from pool 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_181_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.6 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_181_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 
181 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 181 (KafkaRDD[240] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 181.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 182 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 182 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 182 (KafkaRDD[243] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_182 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 181.0 (TID 181, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_182_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_179_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_182_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 182 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 182 (KafkaRDD[243] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 182.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Got job 183 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 183 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting ResultStage 183 (KafkaRDD[247] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_183 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 182.0 (TID 182, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:39:00 INFO storage.MemoryStore: Block broadcast_183_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_183_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:39:00 INFO spark.SparkContext: Created broadcast 183 from broadcast at DAGScheduler.scala:1006 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 183 (KafkaRDD[247] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Adding task set 183.0 with 1 tasks 18/04/17 16:39:00 INFO scheduler.DAGScheduler: ResultStage 161 (foreachPartition at 
PredictorEngineApp.java:153) finished in 0.085 s 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 183.0 (TID 183, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Job 161 finished: foreachPartition at PredictorEngineApp.java:153, took 0.124697 s 18/04/17 16:39:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1a00959d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1a00959d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53334, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_180_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_181_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_182_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9220, negotiated timeout = 60000 18/04/17 16:39:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9220 18/04/17 16:39:00 INFO storage.BlockManagerInfo: Added broadcast_183_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:39:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9220 closed 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.15 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 162.0 (TID 162) in 173 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 162.0, whose tasks have all completed, from pool 18/04/17 16:39:00 INFO scheduler.DAGScheduler: ResultStage 162 (foreachPartition at PredictorEngineApp.java:153) finished in 0.173 s 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Job 162 finished: foreachPartition at PredictorEngineApp.java:153, took 0.216741 s 18/04/17 16:39:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x42201d7b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x42201d7b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: Opening socket 
connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53337, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9221, negotiated timeout = 60000 18/04/17 16:39:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9221 18/04/17 16:39:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9221 closed 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.5 from job set of time 1523972340000 ms 18/04/17 16:39:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 175.0 (TID 175) in 277 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:39:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 175.0, whose tasks have all completed, from pool 18/04/17 16:39:00 INFO scheduler.DAGScheduler: ResultStage 175 (foreachPartition at PredictorEngineApp.java:153) finished in 0.278 s 18/04/17 16:39:00 INFO scheduler.DAGScheduler: Job 175 finished: foreachPartition at PredictorEngineApp.java:153, took 0.371967 s 18/04/17 16:39:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x285e5ca7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x285e5ca70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36084, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b4a, negotiated timeout = 60000 18/04/17 16:39:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b4a 18/04/17 16:39:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b4a closed 18/04/17 16:39:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.35 from job set of time 1523972340000 ms 18/04/17 16:39:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 165.0 (TID 165) in 2050 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:39:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 165.0, whose tasks have all completed, from pool 18/04/17 16:39:02 INFO scheduler.DAGScheduler: ResultStage 165 (foreachPartition at PredictorEngineApp.java:153) finished in 2.051 s 18/04/17 16:39:02 INFO scheduler.DAGScheduler: Job 165 finished: foreachPartition at PredictorEngineApp.java:153, took 2.106414 s 18/04/17 16:39:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x281d86b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x281d86b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59728, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c925b, negotiated timeout = 60000 18/04/17 16:39:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c925b 18/04/17 16:39:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c925b closed 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.25 from job set of time 1523972340000 ms 18/04/17 16:39:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 183.0 (TID 183) in 2129 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:39:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 183.0, whose tasks have all completed, from pool 18/04/17 16:39:02 INFO scheduler.DAGScheduler: ResultStage 183 (foreachPartition at PredictorEngineApp.java:153) finished in 2.130 s 18/04/17 16:39:02 INFO scheduler.DAGScheduler: Job 183 finished: foreachPartition at PredictorEngineApp.java:153, took 2.244752 s 18/04/17 16:39:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xe35860 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xe358600x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53349, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a922a, negotiated timeout = 60000 18/04/17 16:39:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a922a 18/04/17 16:39:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a922a closed 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.31 from job set of time 1523972340000 ms 18/04/17 16:39:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 178.0 (TID 178) in 2269 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:39:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 178.0, whose tasks have all completed, from pool 18/04/17 16:39:02 INFO scheduler.DAGScheduler: ResultStage 178 (foreachPartition at PredictorEngineApp.java:153) finished in 2.270 s 18/04/17 16:39:02 INFO scheduler.DAGScheduler: Job 180 finished: foreachPartition at PredictorEngineApp.java:153, took 2.366285 s 18/04/17 16:39:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1daafdd8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1daafdd80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36096, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b4e, negotiated timeout = 60000 18/04/17 16:39:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b4e 18/04/17 16:39:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b4e closed 18/04/17 16:39:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.8 from job set of time 1523972340000 ms 18/04/17 16:39:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 159.0 (TID 159) in 3254 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:39:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 159.0, whose tasks have all completed, from pool 18/04/17 16:39:03 INFO scheduler.DAGScheduler: ResultStage 159 (foreachPartition at PredictorEngineApp.java:153) finished in 3.255 s 18/04/17 16:39:03 INFO scheduler.DAGScheduler: Job 159 finished: foreachPartition at PredictorEngineApp.java:153, took 3.287708 s 18/04/17 16:39:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x549590cf connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x549590cf0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53357, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a922d, negotiated timeout = 60000 18/04/17 16:39:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a922d 18/04/17 16:39:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a922d closed 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.19 from job set of time 1523972340000 ms 18/04/17 16:39:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 181.0 (TID 181) in 3662 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:39:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 181.0, whose tasks have all completed, from pool 18/04/17 16:39:03 INFO scheduler.DAGScheduler: ResultStage 181 (foreachPartition at PredictorEngineApp.java:153) finished in 3.663 s 18/04/17 16:39:03 INFO scheduler.DAGScheduler: Job 181 finished: foreachPartition at PredictorEngineApp.java:153, took 3.772940 s 18/04/17 16:39:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x16a7c730 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x16a7c7300x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59742, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c925d, negotiated timeout = 60000 18/04/17 16:39:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c925d 18/04/17 16:39:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c925d closed 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.24 from job set of time 1523972340000 ms 18/04/17 16:39:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 177.0 (TID 177) in 3771 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:39:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 177.0, whose tasks have all completed, from pool 18/04/17 16:39:03 INFO scheduler.DAGScheduler: ResultStage 177 (foreachPartition at PredictorEngineApp.java:153) finished in 3.772 s 18/04/17 16:39:03 INFO scheduler.DAGScheduler: Job 177 finished: foreachPartition at PredictorEngineApp.java:153, took 3.872070 s 18/04/17 16:39:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x504888de connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x504888de0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59745, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c925e, negotiated timeout = 60000 18/04/17 16:39:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c925e 18/04/17 16:39:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c925e closed 18/04/17 16:39:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.7 from job set of time 1523972340000 ms 18/04/17 16:39:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 164.0 (TID 164) in 5333 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:39:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 164.0, whose tasks have all completed, from pool 18/04/17 16:39:05 INFO scheduler.DAGScheduler: ResultStage 164 (foreachPartition at PredictorEngineApp.java:153) finished in 5.334 s 18/04/17 16:39:05 INFO scheduler.DAGScheduler: Job 164 finished: foreachPartition at PredictorEngineApp.java:153, took 5.385538 s 18/04/17 16:39:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x76dbb9bf connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x76dbb9bf0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53369, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a922e, negotiated timeout = 60000 18/04/17 16:39:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a922e 18/04/17 16:39:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a922e closed 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.12 from job set of time 1523972340000 ms 18/04/17 16:39:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 158.0 (TID 158) in 5676 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:39:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 158.0, whose tasks have all completed, from pool 18/04/17 16:39:05 INFO scheduler.DAGScheduler: ResultStage 158 (foreachPartition at PredictorEngineApp.java:153) finished in 5.676 s 18/04/17 16:39:05 INFO scheduler.DAGScheduler: Job 158 finished: foreachPartition at PredictorEngineApp.java:153, took 5.694813 s 18/04/17 16:39:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x401bd88f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x401bd88f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53372, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a922f, negotiated timeout = 60000 18/04/17 16:39:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a922f 18/04/17 16:39:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a922f closed 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.28 from job set of time 1523972340000 ms 18/04/17 16:39:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 173.0 (TID 173) in 5686 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:39:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 173.0, whose tasks have all completed, from pool 18/04/17 16:39:05 INFO scheduler.DAGScheduler: ResultStage 173 (foreachPartition at PredictorEngineApp.java:153) finished in 5.687 s 18/04/17 16:39:05 INFO scheduler.DAGScheduler: Job 173 finished: foreachPartition at PredictorEngineApp.java:153, took 5.774404 s 18/04/17 16:39:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5ce14ba0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5ce14ba00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59757, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9260, negotiated timeout = 60000 18/04/17 16:39:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9260 18/04/17 16:39:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9260 closed 18/04/17 16:39:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.32 from job set of time 1523972340000 ms 18/04/17 16:39:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 171.0 (TID 171) in 6461 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:39:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 171.0, whose tasks have all completed, from pool 18/04/17 16:39:06 INFO scheduler.DAGScheduler: ResultStage 171 (foreachPartition at PredictorEngineApp.java:153) finished in 6.462 s 18/04/17 16:39:06 INFO scheduler.DAGScheduler: Job 171 finished: foreachPartition at PredictorEngineApp.java:153, took 6.541576 s 18/04/17 16:39:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x61335b6b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x61335b6b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59762, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9262, negotiated timeout = 60000 18/04/17 16:39:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9262 18/04/17 16:39:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9262 closed 18/04/17 16:39:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.33 from job set of time 1523972340000 ms 18/04/17 16:39:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 170.0 (TID 170) in 6512 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:39:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 170.0, whose tasks have all completed, from pool 18/04/17 16:39:06 INFO scheduler.DAGScheduler: ResultStage 170 (foreachPartition at PredictorEngineApp.java:153) finished in 6.512 s 18/04/17 16:39:06 INFO scheduler.DAGScheduler: Job 170 finished: foreachPartition at PredictorEngineApp.java:153, took 6.588033 s 18/04/17 16:39:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6256f44e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6256f44e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53383, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9231, negotiated timeout = 60000 18/04/17 16:39:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9231 18/04/17 16:39:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9231 closed 18/04/17 16:39:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.6 from job set of time 1523972340000 ms 18/04/17 16:39:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 172.0 (TID 172) in 8905 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:39:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 172.0, whose tasks have all completed, from pool 18/04/17 16:39:09 INFO scheduler.DAGScheduler: ResultStage 172 (foreachPartition at PredictorEngineApp.java:153) finished in 8.906 s 18/04/17 16:39:09 INFO scheduler.DAGScheduler: Job 172 finished: foreachPartition at PredictorEngineApp.java:153, took 8.989668 s 18/04/17 16:39:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5ef87b25 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5ef87b250x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59771, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9264, negotiated timeout = 60000 18/04/17 16:39:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9264 18/04/17 16:39:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9264 closed 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.18 from job set of time 1523972340000 ms 18/04/17 16:39:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 157.0 (TID 157) in 9378 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:39:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 157.0, whose tasks have all completed, from pool 18/04/17 16:39:09 INFO scheduler.DAGScheduler: ResultStage 157 (foreachPartition at PredictorEngineApp.java:153) finished in 9.378 s 18/04/17 16:39:09 INFO scheduler.DAGScheduler: Job 157 finished: foreachPartition at PredictorEngineApp.java:153, took 9.392775 s 18/04/17 16:39:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6b29c8c2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6b29c8c20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53393, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9232, negotiated timeout = 60000 18/04/17 16:39:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9232 18/04/17 16:39:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9232 closed 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.29 from job set of time 1523972340000 ms 18/04/17 16:39:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 166.0 (TID 166) in 9403 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:39:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 166.0, whose tasks have all completed, from pool 18/04/17 16:39:09 INFO scheduler.DAGScheduler: ResultStage 166 (foreachPartition at PredictorEngineApp.java:153) finished in 9.404 s 18/04/17 16:39:09 INFO scheduler.DAGScheduler: Job 166 finished: foreachPartition at PredictorEngineApp.java:153, took 9.462469 s 18/04/17 16:39:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1ca35655 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1ca356550x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36140, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b53, negotiated timeout = 60000 18/04/17 16:39:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b53 18/04/17 16:39:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b53 closed 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.2 from job set of time 1523972340000 ms 18/04/17 16:39:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 168.0 (TID 168) in 9592 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:39:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 168.0, whose tasks have all completed, from pool 18/04/17 16:39:09 INFO scheduler.DAGScheduler: ResultStage 168 (foreachPartition at PredictorEngineApp.java:153) finished in 9.592 s 18/04/17 16:39:09 INFO scheduler.DAGScheduler: Job 168 finished: foreachPartition at PredictorEngineApp.java:153, took 9.659432 s 18/04/17 16:39:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f4a3f68 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f4a3f680x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36143, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b54, negotiated timeout = 60000 18/04/17 16:39:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b54 18/04/17 16:39:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b54 closed 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.9 from job set of time 1523972340000 ms 18/04/17 16:39:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 179.0 (TID 179) in 9720 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:39:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 179.0, whose tasks have all completed, from pool 18/04/17 16:39:09 INFO scheduler.DAGScheduler: ResultStage 179 (foreachPartition at PredictorEngineApp.java:153) finished in 9.726 s 18/04/17 16:39:09 INFO scheduler.DAGScheduler: Job 179 finished: foreachPartition at PredictorEngineApp.java:153, took 9.831890 s 18/04/17 16:39:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x18138c18 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x18138c180x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36146, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b55, negotiated timeout = 60000 18/04/17 16:39:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b55 18/04/17 16:39:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b55 closed 18/04/17 16:39:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.23 from job set of time 1523972340000 ms 18/04/17 16:39:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 180.0 (TID 180) in 9893 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:39:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 180.0, whose tasks have all completed, from pool 18/04/17 16:39:10 INFO scheduler.DAGScheduler: ResultStage 180 (foreachPartition at PredictorEngineApp.java:153) finished in 9.894 s 18/04/17 16:39:10 INFO scheduler.DAGScheduler: Job 178 finished: foreachPartition at PredictorEngineApp.java:153, took 10.008002 s 18/04/17 16:39:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x343fe861 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x343fe8610x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59787, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9267, negotiated timeout = 60000 18/04/17 16:39:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9267 18/04/17 16:39:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9267 closed 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.20 from job set of time 1523972340000 ms 18/04/17 16:39:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 182.0 (TID 182) in 10218 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:39:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 182.0, whose tasks have all completed, from pool 18/04/17 16:39:10 INFO scheduler.DAGScheduler: ResultStage 182 (foreachPartition at PredictorEngineApp.java:153) finished in 10.219 s 18/04/17 16:39:10 INFO scheduler.DAGScheduler: Job 182 finished: foreachPartition at PredictorEngineApp.java:153, took 10.331896 s 18/04/17 16:39:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2b528bfc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2b528bfc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36153, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b57, negotiated timeout = 60000 18/04/17 16:39:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b57 18/04/17 16:39:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 174.0 (TID 174) in 10272 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:39:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 174.0, whose tasks have all completed, from pool 18/04/17 16:39:10 INFO scheduler.DAGScheduler: ResultStage 174 (foreachPartition at PredictorEngineApp.java:153) finished in 10.272 s 18/04/17 16:39:10 INFO scheduler.DAGScheduler: Job 174 finished: foreachPartition at PredictorEngineApp.java:153, took 10.363445 s 18/04/17 16:39:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b57 closed 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x655bee93 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x655bee930x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53412, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.27 from job set of time 1523972340000 ms 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9237, negotiated timeout = 60000 18/04/17 16:39:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9237 18/04/17 16:39:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9237 closed 18/04/17 16:39:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.34 from job set of time 1523972340000 ms 18/04/17 16:39:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 160.0 (TID 160) in 10988 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:39:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 160.0, whose tasks have all completed, from pool 18/04/17 16:39:11 INFO scheduler.DAGScheduler: ResultStage 160 (foreachPartition at PredictorEngineApp.java:153) finished in 10.989 s 18/04/17 16:39:11 INFO scheduler.DAGScheduler: Job 160 finished: foreachPartition at PredictorEngineApp.java:153, took 11.024668 s 18/04/17 16:39:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5146be13 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5146be130x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59798, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c926a, negotiated timeout = 60000 18/04/17 16:39:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c926a 18/04/17 16:39:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c926a closed 18/04/17 16:39:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.1 from job set of time 1523972340000 ms 18/04/17 16:39:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 167.0 (TID 167) in 11506 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:39:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 167.0, whose tasks have all completed, from pool 18/04/17 16:39:11 INFO scheduler.DAGScheduler: ResultStage 167 (foreachPartition at PredictorEngineApp.java:153) finished in 11.506 s 18/04/17 16:39:11 INFO scheduler.DAGScheduler: Job 167 finished: foreachPartition at PredictorEngineApp.java:153, took 11.569863 s 18/04/17 16:39:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4b400f9e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4b400f9e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53419, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a923b, negotiated timeout = 60000 18/04/17 16:39:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a923b 18/04/17 16:39:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a923b closed 18/04/17 16:39:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.10 from job set of time 1523972340000 ms 18/04/17 16:39:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 163.0 (TID 163) in 14815 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:39:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 163.0, whose tasks have all completed, from pool 18/04/17 16:39:14 INFO scheduler.DAGScheduler: ResultStage 163 (foreachPartition at PredictorEngineApp.java:153) finished in 14.816 s 18/04/17 16:39:14 INFO scheduler.DAGScheduler: Job 163 finished: foreachPartition at PredictorEngineApp.java:153, took 14.863713 s 18/04/17 16:39:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x41eff669 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x41eff6690x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59808, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c926d, negotiated timeout = 60000 18/04/17 16:39:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c926d 18/04/17 16:39:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c926d closed 18/04/17 16:39:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.22 from job set of time 1523972340000 ms 18/04/17 16:39:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 169.0 (TID 169) in 16290 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:39:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 169.0, whose tasks have all completed, from pool 18/04/17 16:39:16 INFO scheduler.DAGScheduler: ResultStage 169 (foreachPartition at PredictorEngineApp.java:153) finished in 16.292 s 18/04/17 16:39:16 INFO scheduler.DAGScheduler: Job 169 finished: foreachPartition at PredictorEngineApp.java:153, took 16.363604 s 18/04/17 16:39:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3e5a2165 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3e5a21650x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36176, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b58, negotiated timeout = 60000 18/04/17 16:39:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b58 18/04/17 16:39:16 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b58 closed 18/04/17 16:39:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 176.0 (TID 176) in 16295 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:39:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 176.0, whose tasks have all completed, from pool 18/04/17 16:39:16 INFO scheduler.DAGScheduler: ResultStage 176 (foreachPartition at PredictorEngineApp.java:153) finished in 16.296 s 18/04/17 16:39:16 INFO scheduler.DAGScheduler: Job 176 finished: foreachPartition at PredictorEngineApp.java:153, took 16.392810 s 18/04/17 16:39:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x16bad613 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:39:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x16bad6130x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:39:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
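Taken together, the jobs above show the layout of each micro-batch: batch times are 60 000 ms apart (1523972340000 here, 1523972400000 below), every batch fans out into three dozen numbered output operations (streaming jobs ms.0 through ms.35 are started for the next batch below), and each of them is a one-stage, one-task job over a single-partition KafkaRDD built by createDirectStream at PredictorEngineApp.java:125 and consumed by foreachPartition at line 153. A driver-side sketch that would produce this shape, assuming one direct Kafka stream per topic with one output operation each; the topic names, broker list and the exact number of streams are illustrative, not taken from the log.

// Illustrative driver setup matching the job pattern in this log: 60-second
// batches, one direct Kafka stream per topic, one foreachRDD per stream.
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class StreamingLayoutSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine");
        // 60 s batches: consecutive batch times in the log are 60 000 ms apart.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // assumed

        List<String> topics = Arrays.asList("topic-0", "topic-1" /* ... ~36 topics assumed */);
        for (String topic : topics) {
            Set<String> topicSet = new HashSet<>(Arrays.asList(topic));
            // Shows up as "createDirectStream at PredictorEngineApp.java:125" in the log.
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, topicSet);

            // One output operation per stream => one "streaming job ... ms.N" per batch;
            // a single Kafka partition per topic => one task per ResultStage (1/1).
            stream.foreachRDD(rdd -> rdd.foreachPartition(records -> {
                while (records.hasNext()) {
                    records.next(); // real scoring/writing logic is not shown in the log
                }
            }));
        }

        jssc.start();
        jssc.awaitTermination();
    }
}
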
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:39:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36179, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:39:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b59, negotiated timeout = 60000 18/04/17 16:39:16 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.11 from job set of time 1523972340000 ms 18/04/17 16:39:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b59 18/04/17 16:39:16 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b59 closed 18/04/17 16:39:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:39:16 INFO scheduler.JobScheduler: Finished job streaming job 1523972340000 ms.26 from job set of time 1523972340000 ms 18/04/17 16:39:16 INFO scheduler.JobScheduler: Total delay: 16.497 s for time 1523972340000 ms (execution: 16.432 s) 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 180 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 180 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 180 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 180 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 181 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 181 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 181 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 181 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 182 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 182 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 182 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 182 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 183 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 183 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 183 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 183 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 184 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 184 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 184 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 184 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 185 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 185 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 185 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 185 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 186 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 186 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 186 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 186 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 187 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 187 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 187 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 187 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 188 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 188 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 188 from persistence list 18/04/17 16:39:16 INFO 
storage.BlockManager: Removing RDD 188 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 189 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 189 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 189 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 189 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 190 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 190 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 190 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 190 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 191 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 191 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 191 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 191 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 192 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 192 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 192 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 192 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 193 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 193 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 193 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 193 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 194 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 194 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 194 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 194 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 195 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 195 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 195 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 195 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 196 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 196 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 196 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 196 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 197 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 197 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 197 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 197 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 198 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 198 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 198 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 198 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 199 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 199 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 199 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 199 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 200 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 200 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 200 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 200 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 201 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 201 18/04/17 
16:39:16 INFO kafka.KafkaRDD: Removing RDD 201 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 201 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 202 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 202 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 202 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 202 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 203 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 203 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 203 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 203 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 204 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 204 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 204 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 204 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 205 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 205 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 205 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 205 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 206 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 206 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 206 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 206 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 207 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 207 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 207 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 207 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 208 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 208 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 208 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 208 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 209 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 209 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 209 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 209 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 210 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 210 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 210 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 210 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 211 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 211 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 211 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 211 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 212 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 212 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 212 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 212 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 213 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 213 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 213 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 213 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 214 from 
persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 214 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 214 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 214 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 215 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 215 18/04/17 16:39:16 INFO kafka.KafkaRDD: Removing RDD 215 from persistence list 18/04/17 16:39:16 INFO storage.BlockManager: Removing RDD 215 18/04/17 16:39:16 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:39:16 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972220000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Added jobs for time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.0 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.1 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.0 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.2 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.3 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.5 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.8 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.6 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.9 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.10 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.11 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.12 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.4 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.13 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.4 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.14 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.17 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.15 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.3 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.16 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.17 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.20 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.19 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.14 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.16 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.13 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.22 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.23 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.18 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.25 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.21 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.24 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.26 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.21 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.27 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.29 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.7 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.28 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.30 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.30 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.33 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.31 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.32 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.34 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972400000 ms.35 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.35 from job set of time 1523972400000 ms 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: 
foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 186 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 184 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 184 (KafkaRDD[274] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_184 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_184_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_184_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 184 from broadcast at 
DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 184 (KafkaRDD[274] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 184.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 185 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 185 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 185 (KafkaRDD[279] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 184.0 (TID 184, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_185 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_185_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_185_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 185 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 185 (KafkaRDD[279] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 185.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 184 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 186 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 186 (KafkaRDD[253] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 185.0 (TID 185, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_186 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Removed broadcast_180_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_186_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_186_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 186 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 186 (KafkaRDD[253] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 186.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 187 (foreachPartition at PredictorEngineApp.java:153) 
with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 187 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 187 (KafkaRDD[263] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Removed broadcast_180_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_184_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 186.0 (TID 186, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_187 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO spark.ContextCleaner: Cleaned accumulator 182 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_187_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Removed broadcast_181_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_187_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 187 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 187 (KafkaRDD[263] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 187.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 189 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 188 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 188 (KafkaRDD[280] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_188 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 187.0 (TID 187, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Removed broadcast_181_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_185_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO spark.ContextCleaner: Cleaned accumulator 183 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_188_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_188_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Removed broadcast_182_piece0 on ***IP masked***:45737 in memory (size: 3.1 
KB, free: 491.3 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 188 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 188 (KafkaRDD[280] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 188.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 188 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 189 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 189 (KafkaRDD[271] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 188.0 (TID 188, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_189 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Removed broadcast_182_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO spark.ContextCleaner: Cleaned accumulator 184 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Removed broadcast_183_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Removed broadcast_183_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_189_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_186_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_189_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 189 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 189 (KafkaRDD[271] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 189.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 190 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 190 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 190 (KafkaRDD[254] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 189.0 (TID 189, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_190 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_190_piece0 stored as bytes in memory (estimated size 3.1 KB, free 
490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_190_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 190 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 190 (KafkaRDD[254] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 190.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 191 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 191 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_188_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 191 (KafkaRDD[267] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 190.0 (TID 190, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_191 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_191_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_191_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 191 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 191 (KafkaRDD[267] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 191.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 192 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 192 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 192 (KafkaRDD[272] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 191.0 (TID 191, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_192 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_187_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_189_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_192_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_192_piece0 in 
memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 192 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 192 (KafkaRDD[272] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 192.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 193 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 193 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 193 (KafkaRDD[278] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 192.0 (TID 192, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_193 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_193_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_193_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 193 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 193 (KafkaRDD[278] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 193.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 194 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 194 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 194 (KafkaRDD[264] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_190_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_194 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 193.0 (TID 193, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_194_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_194_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 194 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 194 (KafkaRDD[264] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO 
cluster.YarnClusterScheduler: Adding task set 194.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 195 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 195 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 195 (KafkaRDD[283] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_195 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 194.0 (TID 194, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_192_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_195_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_195_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 195 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 195 (KafkaRDD[283] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 195.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 196 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 196 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 196 (KafkaRDD[257] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_196 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_191_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 195.0 (TID 195, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_196_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_196_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 196 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 196 (KafkaRDD[257] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 196.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 197 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 197 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 197 (KafkaRDD[261] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_197 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 196.0 (TID 196, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_194_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_197_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_197_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 197 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 197 (KafkaRDD[261] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 197.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 198 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 198 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 198 (KafkaRDD[262] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_198 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 197.0 (TID 197, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_193_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_195_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_198_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_198_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_196_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 198 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 198 (KafkaRDD[262] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 198.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 199 (foreachPartition at 
PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 199 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 199 (KafkaRDD[276] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_199 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 198.0 (TID 198, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_199_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_199_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 199 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 199 (KafkaRDD[276] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 199.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 200 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 200 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 200 (KafkaRDD[281] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_200 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 199.0 (TID 199, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_198_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_197_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_200_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_200_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 200 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 200 (KafkaRDD[281] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 200.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 201 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 201 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents 
of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 201 (KafkaRDD[285] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_201 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 200.0 (TID 200, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_201_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_201_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 201 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 201 (KafkaRDD[285] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 201.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 203 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 202 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 202 (KafkaRDD[284] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_199_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_202 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 201.0 (TID 201, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_202_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_202_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 202 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 202 (KafkaRDD[284] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 202.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 202 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 203 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 203 (KafkaRDD[286] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_203 stored as values in memory (estimated size 5.7 KB, 
free 490.4 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 202.0 (TID 202, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_203_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_203_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 203 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 203 (KafkaRDD[286] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 203.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 204 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 204 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 204 (KafkaRDD[259] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_204 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 203.0 (TID 203, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_204_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_200_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_204_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 204 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 204 (KafkaRDD[259] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 204.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 205 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 205 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 205 (KafkaRDD[258] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_205 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 204.0 (TID 204, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_205_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added 
broadcast_205_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 205 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 205 (KafkaRDD[258] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 205.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 206 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 206 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 206 (KafkaRDD[270] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_206 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 205.0 (TID 205, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_206_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_206_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 206 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 206 (KafkaRDD[270] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 206.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 207 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 207 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 207 (KafkaRDD[260] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_207 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 206.0 (TID 206, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_204_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_201_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_207_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_207_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 207 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO 
scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 207 (KafkaRDD[260] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 207.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 208 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 208 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 208 (KafkaRDD[275] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_202_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_208 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 207.0 (TID 207, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_205_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_208_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_208_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 208 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 208 (KafkaRDD[275] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 208.0 with 1 tasks 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Got job 209 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 209 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting ResultStage 209 (KafkaRDD[277] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_209 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 208.0 (TID 208, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:40:00 INFO storage.MemoryStore: Block broadcast_209_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_209_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:40:00 INFO spark.SparkContext: Created broadcast 209 from broadcast at DAGScheduler.scala:1006 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 209 (KafkaRDD[277] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Adding task set 209.0 with 1 tasks 
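The ResultStages being submitted above all wrap a KafkaRDD created at createDirectStream (PredictorEngineApp.java:125) and are triggered by foreachPartition (PredictorEngineApp.java:153), one single-task job per output operation and topic-partition. A minimal sketch of driver code that would produce this pattern, assuming Spark 1.6 with the Kafka 0.8 direct API and 60-second batches (the batch times 1523972400000 and 1523972460000 ms in this log are 60 s apart); the broker list, topic name and per-record body below are placeholders, not the actual PredictorEngineApp source:

    // Hypothetical sketch only: structure implied by the KafkaRDD / createDirectStream /
    // foreachPartition references in the log. Brokers, topic and processing are placeholders.
    import java.util.*;
    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.function.VoidFunction;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    public class DirectStreamSketch {
        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
            // 60 s batch interval, matching the 1523972400000 -> 1523972460000 ms batch times
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

            Map<String, String> kafkaParams = new HashMap<>();
            kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers
            Set<String> topics = new HashSet<>(Arrays.asList("events"));          // placeholder topic

            // The direct API builds one KafkaRDD per batch, visible as KafkaRDD[...] in the stage names
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, topics);

            stream.foreachRDD(new VoidFunction<JavaPairRDD<String, String>>() {
                @Override
                public void call(JavaPairRDD<String, String> rdd) throws Exception {
                    // Each foreachPartition call becomes one ResultStage / single-task job in the log
                    rdd.foreachPartition(new VoidFunction<Iterator<Tuple2<String, String>>>() {
                        @Override
                        public void call(Iterator<Tuple2<String, String>> records) throws Exception {
                            while (records.hasNext()) {
                                records.next(); // placeholder per-record processing
                            }
                        }
                    });
                }
            });

            jssc.start();
            jssc.awaitTermination();
        }
    }

With many independent output operations registered on the stream (the log shows streaming jobs numbered up to ms.34 per batch), each batch fans out into dozens of these one-partition ResultStages, which is why the broadcast/stage/task lines above repeat so densely within a single second.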
18/04/17 16:40:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 209.0 (TID 209, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_207_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_203_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_209_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_208_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO storage.BlockManagerInfo: Added broadcast_206_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:40:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 209.0 (TID 209) in 732 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:40:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 209.0, whose tasks have all completed, from pool 18/04/17 16:40:00 INFO scheduler.DAGScheduler: ResultStage 209 (foreachPartition at PredictorEngineApp.java:153) finished in 0.734 s 18/04/17 16:40:00 INFO scheduler.DAGScheduler: Job 209 finished: foreachPartition at PredictorEngineApp.java:153, took 0.852584 s 18/04/17 16:40:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x214ca441 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x214ca4410x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59977, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9280, negotiated timeout = 60000 18/04/17 16:40:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9280 18/04/17 16:40:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9280 closed 18/04/17 16:40:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.25 from job set of time 1523972400000 ms 18/04/17 16:40:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 207.0 (TID 207) in 928 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:40:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 207.0, whose tasks have all completed, from pool 18/04/17 16:40:01 INFO scheduler.DAGScheduler: ResultStage 207 (foreachPartition at PredictorEngineApp.java:153) finished in 0.929 s 18/04/17 16:40:01 INFO scheduler.DAGScheduler: Job 207 finished: foreachPartition at PredictorEngineApp.java:153, took 1.042495 s 18/04/17 16:40:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x59f0ef94 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x59f0ef940x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59980, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9281, negotiated timeout = 60000 18/04/17 16:40:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9281 18/04/17 16:40:01 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9281 closed 18/04/17 16:40:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:01 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.8 from job set of time 1523972400000 ms 18/04/17 16:40:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 204.0 (TID 204) in 2109 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:40:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 204.0, whose tasks have all completed, from pool 18/04/17 16:40:02 INFO scheduler.DAGScheduler: ResultStage 204 (foreachPartition at PredictorEngineApp.java:153) finished in 2.110 s 18/04/17 16:40:02 INFO scheduler.DAGScheduler: Job 204 finished: foreachPartition at PredictorEngineApp.java:153, took 2.214748 s 18/04/17 16:40:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ff32645 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4ff326450x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53603, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9253, negotiated timeout = 60000 18/04/17 16:40:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9253 18/04/17 16:40:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9253 closed 18/04/17 16:40:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.7 from job set of time 1523972400000 ms 18/04/17 16:40:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 197.0 (TID 197) in 3166 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:40:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 197.0, whose tasks have all completed, from pool 18/04/17 16:40:03 INFO scheduler.DAGScheduler: ResultStage 197 (foreachPartition at PredictorEngineApp.java:153) finished in 3.168 s 18/04/17 16:40:03 INFO scheduler.DAGScheduler: Job 197 finished: foreachPartition at PredictorEngineApp.java:153, took 3.247824 s 18/04/17 16:40:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3ad09454 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3ad094540x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36353, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b69, negotiated timeout = 60000 18/04/17 16:40:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b69 18/04/17 16:40:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b69 closed 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.9 from job set of time 1523972400000 ms 18/04/17 16:40:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 191.0 (TID 191) in 3421 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:40:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 191.0, whose tasks have all completed, from pool 18/04/17 16:40:03 INFO scheduler.DAGScheduler: ResultStage 191 (foreachPartition at PredictorEngineApp.java:153) finished in 3.422 s 18/04/17 16:40:03 INFO scheduler.DAGScheduler: Job 191 finished: foreachPartition at PredictorEngineApp.java:153, took 3.478660 s 18/04/17 16:40:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x49830122 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x498301220x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53612, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9254, negotiated timeout = 60000 18/04/17 16:40:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9254 18/04/17 16:40:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9254 closed 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.15 from job set of time 1523972400000 ms 18/04/17 16:40:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 195.0 (TID 195) in 3689 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:40:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 195.0, whose tasks have all completed, from pool 18/04/17 16:40:03 INFO scheduler.DAGScheduler: ResultStage 195 (foreachPartition at PredictorEngineApp.java:153) finished in 3.690 s 18/04/17 16:40:03 INFO scheduler.DAGScheduler: Job 195 finished: foreachPartition at PredictorEngineApp.java:153, took 3.763857 s 18/04/17 16:40:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3aad6616 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3aad66160x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59997, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9282, negotiated timeout = 60000 18/04/17 16:40:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9282 18/04/17 16:40:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9282 closed 18/04/17 16:40:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.31 from job set of time 1523972400000 ms 18/04/17 16:40:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 190.0 (TID 190) in 5529 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:40:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 190.0, whose tasks have all completed, from pool 18/04/17 16:40:05 INFO scheduler.DAGScheduler: ResultStage 190 (foreachPartition at PredictorEngineApp.java:153) finished in 5.530 s 18/04/17 16:40:05 INFO scheduler.DAGScheduler: Job 190 finished: foreachPartition at PredictorEngineApp.java:153, took 5.581029 s 18/04/17 16:40:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3559e1a5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3559e1a50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53620, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9256, negotiated timeout = 60000 18/04/17 16:40:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9256 18/04/17 16:40:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9256 closed 18/04/17 16:40:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.2 from job set of time 1523972400000 ms 18/04/17 16:40:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 201.0 (TID 201) in 6251 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:40:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 201.0, whose tasks have all completed, from pool 18/04/17 16:40:06 INFO scheduler.DAGScheduler: ResultStage 201 (foreachPartition at PredictorEngineApp.java:153) finished in 6.253 s 18/04/17 16:40:06 INFO scheduler.DAGScheduler: Job 201 finished: foreachPartition at PredictorEngineApp.java:153, took 6.348554 s 18/04/17 16:40:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6556fe4a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6556fe4a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53624, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9257, negotiated timeout = 60000 18/04/17 16:40:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9257 18/04/17 16:40:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9257 closed 18/04/17 16:40:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.33 from job set of time 1523972400000 ms 18/04/17 16:40:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 203.0 (TID 203) in 6322 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:40:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 203.0, whose tasks have all completed, from pool 18/04/17 16:40:06 INFO scheduler.DAGScheduler: ResultStage 203 (foreachPartition at PredictorEngineApp.java:153) finished in 6.323 s 18/04/17 16:40:06 INFO scheduler.DAGScheduler: Job 202 finished: foreachPartition at PredictorEngineApp.java:153, took 6.424658 s 18/04/17 16:40:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x53504bb2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x53504bb20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36371, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b6b, negotiated timeout = 60000 18/04/17 16:40:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b6b 18/04/17 16:40:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b6b closed 18/04/17 16:40:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.34 from job set of time 1523972400000 ms 18/04/17 16:40:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 188.0 (TID 188) in 8587 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:40:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 188.0, whose tasks have all completed, from pool 18/04/17 16:40:08 INFO scheduler.DAGScheduler: ResultStage 188 (foreachPartition at PredictorEngineApp.java:153) finished in 8.587 s 18/04/17 16:40:08 INFO scheduler.DAGScheduler: Job 189 finished: foreachPartition at PredictorEngineApp.java:153, took 8.630803 s 18/04/17 16:40:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3537c7e8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3537c7e80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53633, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 189.0 (TID 189) in 8593 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:40:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 189.0, whose tasks have all completed, from pool 18/04/17 16:40:08 INFO scheduler.DAGScheduler: ResultStage 189 (foreachPartition at PredictorEngineApp.java:153) finished in 8.594 s 18/04/17 16:40:08 INFO scheduler.DAGScheduler: Job 188 finished: foreachPartition at PredictorEngineApp.java:153, took 8.641902 s 18/04/17 16:40:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9258, negotiated timeout = 60000 18/04/17 16:40:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x34b494fe connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x34b494fe0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53634, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9259, negotiated timeout = 60000 18/04/17 16:40:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9258 18/04/17 16:40:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9259 18/04/17 16:40:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9258 closed 18/04/17 16:40:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9259 closed 18/04/17 16:40:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.28 from job set of time 1523972400000 ms 18/04/17 16:40:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.19 from job set of time 1523972400000 ms 18/04/17 16:40:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 205.0 (TID 205) in 8904 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:40:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 205.0, whose tasks have all completed, from pool 18/04/17 16:40:09 INFO scheduler.DAGScheduler: ResultStage 205 (foreachPartition at PredictorEngineApp.java:153) finished in 8.905 s 18/04/17 16:40:09 INFO scheduler.DAGScheduler: Job 205 finished: foreachPartition at PredictorEngineApp.java:153, took 9.007985 s 18/04/17 16:40:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x44a92ecc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 16:40:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x44a92ecc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60022, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9285, negotiated timeout = 60000 18/04/17 16:40:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9285 18/04/17 16:40:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9285 closed 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.6 from job set of time 1523972400000 ms 18/04/17 16:40:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 192.0 (TID 192) in 9034 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:40:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 192.0, whose tasks have all completed, from pool 18/04/17 16:40:09 INFO scheduler.DAGScheduler: ResultStage 192 (foreachPartition at PredictorEngineApp.java:153) finished in 9.035 s 18/04/17 16:40:09 INFO scheduler.DAGScheduler: Job 192 finished: foreachPartition at PredictorEngineApp.java:153, took 9.095717 s 18/04/17 16:40:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x19977221 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x199772210x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53643, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a925b, negotiated timeout = 60000 18/04/17 16:40:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a925b 18/04/17 16:40:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a925b closed 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.20 from job set of time 1523972400000 ms 18/04/17 16:40:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 208.0 (TID 208) in 9015 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:40:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 208.0, whose tasks have all completed, from pool 18/04/17 16:40:09 INFO scheduler.DAGScheduler: ResultStage 208 (foreachPartition at PredictorEngineApp.java:153) finished in 9.017 s 18/04/17 16:40:09 INFO scheduler.DAGScheduler: Job 208 finished: foreachPartition at PredictorEngineApp.java:153, took 9.132393 s 18/04/17 16:40:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1eb56b95 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1eb56b950x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60028, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9286, negotiated timeout = 60000 18/04/17 16:40:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9286 18/04/17 16:40:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9286 closed 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.23 from job set of time 1523972400000 ms 18/04/17 16:40:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 185.0 (TID 185) in 9264 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:40:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 185.0, whose tasks have all completed, from pool 18/04/17 16:40:09 INFO scheduler.DAGScheduler: ResultStage 185 (foreachPartition at PredictorEngineApp.java:153) finished in 9.265 s 18/04/17 16:40:09 INFO scheduler.DAGScheduler: Job 185 finished: foreachPartition at PredictorEngineApp.java:153, took 9.295729 s 18/04/17 16:40:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6ad2b87f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6ad2b87f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60031, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9287, negotiated timeout = 60000 18/04/17 16:40:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9287 18/04/17 16:40:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9287 closed 18/04/17 16:40:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.27 from job set of time 1523972400000 ms 18/04/17 16:40:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 194.0 (TID 194) in 10064 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:40:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 194.0, whose tasks have all completed, from pool 18/04/17 16:40:10 INFO scheduler.DAGScheduler: ResultStage 194 (foreachPartition at PredictorEngineApp.java:153) finished in 10.065 s 18/04/17 16:40:10 INFO scheduler.DAGScheduler: Job 194 finished: foreachPartition at PredictorEngineApp.java:153, took 10.134913 s 18/04/17 16:40:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xef8c42e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xef8c42e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53653, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a925d, negotiated timeout = 60000 18/04/17 16:40:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a925d 18/04/17 16:40:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a925d closed 18/04/17 16:40:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.12 from job set of time 1523972400000 ms 18/04/17 16:40:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 200.0 (TID 200) in 10815 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:40:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 200.0, whose tasks have all completed, from pool 18/04/17 16:40:10 INFO scheduler.DAGScheduler: ResultStage 200 (foreachPartition at PredictorEngineApp.java:153) finished in 10.817 s 18/04/17 16:40:10 INFO scheduler.DAGScheduler: Job 200 finished: foreachPartition at PredictorEngineApp.java:153, took 10.908656 s 18/04/17 16:40:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5af5d996 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5af5d9960x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53657, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a925e, negotiated timeout = 60000 18/04/17 16:40:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a925e 18/04/17 16:40:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a925e closed 18/04/17 16:40:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.29 from job set of time 1523972400000 ms 18/04/17 16:40:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 202.0 (TID 202) in 11303 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:40:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 202.0, whose tasks have all completed, from pool 18/04/17 16:40:11 INFO scheduler.DAGScheduler: ResultStage 202 (foreachPartition at PredictorEngineApp.java:153) finished in 11.304 s 18/04/17 16:40:11 INFO scheduler.DAGScheduler: Job 203 finished: foreachPartition at PredictorEngineApp.java:153, took 11.402796 s 18/04/17 16:40:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c10a035 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c10a0350x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53660, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a925f, negotiated timeout = 60000 18/04/17 16:40:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a925f 18/04/17 16:40:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a925f closed 18/04/17 16:40:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.32 from job set of time 1523972400000 ms 18/04/17 16:40:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 199.0 (TID 199) in 11365 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:40:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 199.0, whose tasks have all completed, from pool 18/04/17 16:40:11 INFO scheduler.DAGScheduler: ResultStage 199 (foreachPartition at PredictorEngineApp.java:153) finished in 11.366 s 18/04/17 16:40:11 INFO scheduler.DAGScheduler: Job 199 finished: foreachPartition at PredictorEngineApp.java:153, took 11.454168 s 18/04/17 16:40:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xcb4652b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xcb4652b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53663, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9261, negotiated timeout = 60000 18/04/17 16:40:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9261 18/04/17 16:40:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9261 closed 18/04/17 16:40:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.24 from job set of time 1523972400000 ms 18/04/17 16:40:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 196.0 (TID 196) in 12771 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:40:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 196.0, whose tasks have all completed, from pool 18/04/17 16:40:12 INFO scheduler.DAGScheduler: ResultStage 196 (foreachPartition at PredictorEngineApp.java:153) finished in 12.771 s 18/04/17 16:40:12 INFO scheduler.DAGScheduler: Job 196 finished: foreachPartition at PredictorEngineApp.java:153, took 12.848232 s 18/04/17 16:40:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3f8165ac connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3f8165ac0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60050, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9289, negotiated timeout = 60000 18/04/17 16:40:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9289 18/04/17 16:40:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9289 closed 18/04/17 16:40:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:12 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.5 from job set of time 1523972400000 ms 18/04/17 16:40:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 193.0 (TID 193) in 16708 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:40:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 193.0, whose tasks have all completed, from pool 18/04/17 16:40:16 INFO scheduler.DAGScheduler: ResultStage 193 (foreachPartition at PredictorEngineApp.java:153) finished in 16.709 s 18/04/17 16:40:16 INFO scheduler.DAGScheduler: Job 193 finished: foreachPartition at PredictorEngineApp.java:153, took 16.774695 s 18/04/17 16:40:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2f5aff26 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2f5aff260x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36421, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b76, negotiated timeout = 60000 18/04/17 16:40:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b76 18/04/17 16:40:16 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b76 closed 18/04/17 16:40:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:16 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.26 from job set of time 1523972400000 ms 18/04/17 16:40:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 198.0 (TID 198) in 17689 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:40:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 198.0, whose tasks have all completed, from pool 18/04/17 16:40:17 INFO scheduler.DAGScheduler: ResultStage 198 (foreachPartition at PredictorEngineApp.java:153) finished in 17.690 s 18/04/17 16:40:17 INFO scheduler.DAGScheduler: Job 198 finished: foreachPartition at PredictorEngineApp.java:153, took 17.773989 s 18/04/17 16:40:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5d571138 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5d5711380x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60063, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c928e, negotiated timeout = 60000 18/04/17 16:40:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c928e 18/04/17 16:40:17 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c928e closed 18/04/17 16:40:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:17 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.10 from job set of time 1523972400000 ms 18/04/17 16:40:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 186.0 (TID 186) in 20765 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:40:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 186.0, whose tasks have all completed, from pool 18/04/17 16:40:20 INFO scheduler.DAGScheduler: ResultStage 186 (foreachPartition at PredictorEngineApp.java:153) finished in 20.765 s 18/04/17 16:40:20 INFO scheduler.DAGScheduler: Job 184 finished: foreachPartition at PredictorEngineApp.java:153, took 20.801889 s 18/04/17 16:40:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x657a1f32 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x657a1f320x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36433, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b7b, negotiated timeout = 60000 18/04/17 16:40:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 187.0 (TID 187) in 20780 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:40:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 187.0, whose tasks have all completed, from pool 18/04/17 16:40:20 INFO scheduler.DAGScheduler: ResultStage 187 (foreachPartition at PredictorEngineApp.java:153) finished in 20.781 s 18/04/17 16:40:20 INFO scheduler.DAGScheduler: Job 187 finished: foreachPartition at PredictorEngineApp.java:153, took 20.821361 s 18/04/17 16:40:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b7b 18/04/17 16:40:20 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b7b closed 18/04/17 16:40:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:20 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.1 from job set of time 1523972400000 ms 18/04/17 16:40:20 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.11 from job set of time 1523972400000 ms 18/04/17 16:40:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 184.0 (TID 184) in 21014 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:40:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 184.0, whose tasks have all completed, from pool 18/04/17 16:40:21 INFO scheduler.DAGScheduler: ResultStage 184 (foreachPartition at PredictorEngineApp.java:153) finished in 21.014 s 18/04/17 16:40:21 INFO scheduler.DAGScheduler: Job 186 finished: foreachPartition at PredictorEngineApp.java:153, took 21.027530 s 18/04/17 16:40:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x38a1c65f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:40:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x38a1c65f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:40:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
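The repeated "hconnection-0x... connecting to ZooKeeper ensemble=..." / "Closing zookeeper sessionid=..." pairs that follow each finished streaming job above are what the HBase 1.x client logs when a Connection is created and then closed again. A pattern like the sketch below, run once per completed batch (or per written partition), would produce exactly this session churn. The sketch is illustrative only and is not the actual PredictorEngineApp.java source; the client API level (ConnectionFactory/Table) is assumed, and the class, table, column family, and qualifier names are invented.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseWriteSketch {
  // Writes one row and tears the connection down again. Each call opens one ZooKeeper
  // session ("Process identifier=hconnection-0x... connecting to ZooKeeper ensemble=...")
  // and closes it ("Closing zookeeper sessionid=..."), matching the log entries above.
  static void writeRow(String rowKey, String value) throws IOException {
    // HBaseConfiguration.create() reads hbase-site.xml from the classpath, which is
    // where the ZooKeeper quorum and baseZNode (/hbase) come from.
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("predictions"))) { // hypothetical table
      Put put = new Put(Bytes.toBytes(rowKey));
      // hypothetical column family "d" and qualifier "payload"
      put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("payload"), Bytes.toBytes(value));
      table.put(put);
    } // try-with-resources closes the Table, then the Connection and its ZooKeeper session
  }
}

Reusing a single long-lived Connection per JVM is normally preferable; the open-per-write form is shown only because it matches the one-session-per-job pattern visible in this log.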
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:40:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53693, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:40:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9264, negotiated timeout = 60000 18/04/17 16:40:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9264 18/04/17 16:40:21 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9264 closed 18/04/17 16:40:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:40:21 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.22 from job set of time 1523972400000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Added jobs for time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.3 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.4 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.2 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.1 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.0 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.5 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.6 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.7 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.8 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.3 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.4 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.0 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.9 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.11 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.12 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.13 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.10 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.13 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.15 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.14 from job set of time 1523972460000 ms 18/04/17 16:41:00 
INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.16 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.17 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.14 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.16 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.18 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.20 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.19 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.17 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.21 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.21 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.22 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.23 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.24 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.25 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.26 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.27 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.28 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.29 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.30 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.31 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.32 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.30 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.33 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.35 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972460000 ms.34 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.35 from job set of time 1523972460000 ms 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO 
spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 210 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 210 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 210 (KafkaRDD[312] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_210 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_210_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_210_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 210 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 210 (KafkaRDD[312] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 210.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 211 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 211 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 211 (KafkaRDD[317] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 210.0 (TID 210, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_211 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_211_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_211_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 211 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 211 (KafkaRDD[317] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 211.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 212 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 212 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 212 (KafkaRDD[311] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 211.0 (TID 211, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_212 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_212_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_212_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 212 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 212 (KafkaRDD[311] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 212.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 213 (foreachPartition at PredictorEngineApp.java:153) with 1 output 
partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 213 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 213 (KafkaRDD[316] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 212.0 (TID 212, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_213 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_211_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_210_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_213_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_213_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 213 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 213 (KafkaRDD[316] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 213.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 214 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 214 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 214 (KafkaRDD[308] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 213.0 (TID 213, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_214 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_214_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_214_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 214 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 214 (KafkaRDD[308] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 214.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 216 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 215 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 
INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 215 (KafkaRDD[322] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 214.0 (TID 214, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_215 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_215_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_215_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 215 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 215 (KafkaRDD[322] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 215.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 215 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 216 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 216 (KafkaRDD[307] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 215.0 (TID 215, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_216 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_214_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_213_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_216_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_216_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 216 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 216 (KafkaRDD[307] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 216.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 217 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 217 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 217 (KafkaRDD[320] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 
16:41:00 INFO storage.MemoryStore: Block broadcast_217 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 216.0 (TID 216, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_217_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_217_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 217 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 217 (KafkaRDD[320] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 217.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 218 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 218 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 218 (KafkaRDD[303] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 217.0 (TID 217, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_218 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 203 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_215_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_185_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_218_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_216_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_218_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 218 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 218 (KafkaRDD[303] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 218.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 219 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 219 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_185_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: 
Submitting ResultStage 219 (KafkaRDD[300] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 218.0 (TID 218, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_219 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_217_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_212_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_187_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_219_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_219_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 219 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 219 (KafkaRDD[300] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 219.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 220 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 220 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 220 (KafkaRDD[310] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_220 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_187_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 219.0 (TID 219, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 188 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_186_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_220_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_220_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 220 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 220 (KafkaRDD[310] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 220.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 221 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final 
stage: ResultStage 221 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 221 (KafkaRDD[306] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_186_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_218_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_221 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 220.0 (TID 220, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 187 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_189_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_189_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 190 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_221_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_221_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_188_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 221 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 221 (KafkaRDD[306] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 221.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 222 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 222 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_188_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 222 (KafkaRDD[299] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 221.0 (TID 221, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_222 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_219_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 189 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_191_piece0 on ***IP masked***:45737 in memory (size: 
3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_191_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_222_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_222_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 222 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 222 (KafkaRDD[299] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 222.0 with 1 tasks 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 192 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 223 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_220_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 223 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 223 (KafkaRDD[298] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_190_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_223 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 222.0 (TID 222, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_190_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 191 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_223_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_221_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_223_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_193_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 223 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 223 (KafkaRDD[298] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 223.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 224 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 224 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO 
scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 224 (KafkaRDD[296] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 223.0 (TID 223, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_224 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_193_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 194 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_192_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_192_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_224_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_224_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 224 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 224 (KafkaRDD[296] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 193 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 224.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 225 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 225 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 225 (KafkaRDD[293] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_195_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 224.0 (TID 224, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_225 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_222_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_223_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_195_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 196 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_194_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_225_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 
16:41:00 INFO storage.BlockManagerInfo: Added broadcast_225_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 225 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 225 (KafkaRDD[293] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 225.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 226 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 226 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_194_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 226 (KafkaRDD[314] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_226 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 225.0 (TID 225, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 195 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_197_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_197_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_226_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_226_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 226 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 226 (KafkaRDD[314] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 226.0 with 1 tasks 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 198 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 227 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 227 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 227 (KafkaRDD[289] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_196_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_227 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 226.0 (TID 226, ***hostname masked***, 
executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_196_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_224_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 197 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_199_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_227_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_225_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_227_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 227 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 227 (KafkaRDD[289] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 227.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 228 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 228 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 228 (KafkaRDD[313] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_199_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_228 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 200 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 227.0 (TID 227, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_198_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_228_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_228_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_198_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 228 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 228 (KafkaRDD[313] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 228.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 229 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: 
ResultStage 229 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 229 (KafkaRDD[321] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 199 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_229 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 228.0 (TID 228, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_201_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_229_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_201_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_229_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 229 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 229 (KafkaRDD[321] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 229.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 230 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 230 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 230 (KafkaRDD[297] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 202 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_230 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 229.0 (TID 229, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_200_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_200_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_227_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_226_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_230_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 201 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_230_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 
MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 230 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 230 (KafkaRDD[297] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 230.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 231 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 231 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 231 (KafkaRDD[294] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_203_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_231 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 230.0 (TID 230, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_203_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 204 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_231_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_231_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_202_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 231 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 231 (KafkaRDD[294] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 231.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 233 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 232 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 232 (KafkaRDD[315] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_202_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_232 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_229_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 231.0 (TID 231, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2045 bytes) 
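The scheduler entries above all follow one template: a job whose single ResultStage is a KafkaRDD produced by createDirectStream (PredictorEngineApp.java:125) and whose action is foreachPartition (PredictorEngineApp.java:153), with a new job set generated every 60 seconds and over thirty such jobs per batch. A minimal driver-side sketch that would produce this shape is given below. It is not the actual application source: it assumes the Kafka 0.8 direct-stream API bundled with Spark 1.6 (spark-streaming-kafka) and Java 8 lambdas, and the class, broker, and topic names are invented.

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class PredictorEngineSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
    // 60 s batch interval, matching the one-minute job sets in the log
    // (1523972400000 ms, 1523972460000 ms).
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical brokers

    // One direct stream (and one output operation) per topic would explain the many
    // independent "streaming job ... ms.N" entries per batch.
    for (String topic : Arrays.asList("topic-a", "topic-b")) {            // hypothetical topics
      // Corresponds to "createDirectStream at PredictorEngineApp.java:125" in the stage names.
      JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
          jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
          kafkaParams, Collections.singleton(topic));

      // Corresponds to "foreachPartition at PredictorEngineApp.java:153": each batch becomes
      // one job whose ResultStage runs one task per Kafka partition (a single task in the
      // stages above).
      stream.foreachRDD(rdd -> rdd.foreachPartition(records -> {
        while (records.hasNext()) {
          records.next(); // score the record and write the result out (see the HBase sketch above)
        }
      }));
    }

    jssc.start();
    jssc.awaitTermination();
  }
}

Registering one such output operation per topic (or per derived DStream) is what multiplies a single 60-second batch into the long run of "Starting job streaming job ... ms.N" lines and the matching per-job ResultStages seen here.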
18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_205_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_232_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_232_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 232 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 232 (KafkaRDD[315] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 232.0 with 1 tasks 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_205_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 234 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 233 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 233 (KafkaRDD[290] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_233 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 232.0 (TID 232, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 206 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_204_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_230_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_233_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_233_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_204_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 233 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 233 (KafkaRDD[290] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 233.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 232 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 234 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 234 (KafkaRDD[295] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO 
storage.BlockManagerInfo: Added broadcast_228_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_234 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 205 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 233.0 (TID 233, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_207_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_234_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_207_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_234_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 234 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 234 (KafkaRDD[295] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 234.0 with 1 tasks 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Got job 235 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 235 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting ResultStage 235 (KafkaRDD[319] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_235 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 208 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 234.0 (TID 234, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_209_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.MemoryStore: Block broadcast_235_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_235_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO spark.SparkContext: Created broadcast 235 from broadcast at DAGScheduler.scala:1006 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 235 (KafkaRDD[319] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Adding task set 235.0 with 1 tasks 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_232_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_233_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_209_piece0 on ***hostname masked***:43653 in memory 
(size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 210 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 235.0 (TID 235, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_208_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Removed broadcast_208_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO spark.ContextCleaner: Cleaned accumulator 209 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_235_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_234_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO storage.BlockManagerInfo: Added broadcast_231_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 226.0 (TID 226) in 153 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:41:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 226.0, whose tasks have all completed, from pool 18/04/17 16:41:00 INFO scheduler.DAGScheduler: ResultStage 226 (foreachPartition at PredictorEngineApp.java:153) finished in 0.155 s 18/04/17 16:41:00 INFO scheduler.DAGScheduler: Job 226 finished: foreachPartition at PredictorEngineApp.java:153, took 0.262946 s 18/04/17 16:41:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5a66c3e8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5a66c3e80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53839, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a926a, negotiated timeout = 60000 18/04/17 16:41:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a926a 18/04/17 16:41:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a926a closed 18/04/17 16:41:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.26 from job set of time 1523972460000 ms 18/04/17 16:41:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 228.0 (TID 228) in 1874 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:41:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 228.0, whose tasks have all completed, from pool 18/04/17 16:41:02 INFO scheduler.DAGScheduler: ResultStage 228 (foreachPartition at PredictorEngineApp.java:153) finished in 1.875 s 18/04/17 16:41:02 INFO scheduler.DAGScheduler: Job 228 finished: foreachPartition at PredictorEngineApp.java:153, took 1.990439 s 18/04/17 16:41:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2dcfbb37 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2dcfbb370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36588, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b8b, negotiated timeout = 60000 18/04/17 16:41:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b8b 18/04/17 16:41:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b8b closed 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.25 from job set of time 1523972460000 ms 18/04/17 16:41:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 224.0 (TID 224) in 2193 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:41:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 224.0, whose tasks have all completed, from pool 18/04/17 16:41:02 INFO scheduler.DAGScheduler: ResultStage 224 (foreachPartition at PredictorEngineApp.java:153) finished in 2.195 s 18/04/17 16:41:02 INFO scheduler.DAGScheduler: Job 224 finished: foreachPartition at PredictorEngineApp.java:153, took 2.294018 s 18/04/17 16:41:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2aa143d9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2aa143d90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36592, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b8c, negotiated timeout = 60000 18/04/17 16:41:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b8c 18/04/17 16:41:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b8c closed 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.8 from job set of time 1523972460000 ms 18/04/17 16:41:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 234.0 (TID 234) in 2256 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:41:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 234.0, whose tasks have all completed, from pool 18/04/17 16:41:02 INFO scheduler.DAGScheduler: ResultStage 234 (foreachPartition at PredictorEngineApp.java:153) finished in 2.257 s 18/04/17 16:41:02 INFO scheduler.DAGScheduler: Job 232 finished: foreachPartition at PredictorEngineApp.java:153, took 2.390530 s 18/04/17 16:41:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x200f411f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x200f411f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60233, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92a2, negotiated timeout = 60000 18/04/17 16:41:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92a2 18/04/17 16:41:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92a2 closed 18/04/17 16:41:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.7 from job set of time 1523972460000 ms 18/04/17 16:41:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 232.0 (TID 232) in 3866 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:41:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 232.0, whose tasks have all completed, from pool 18/04/17 16:41:04 INFO scheduler.DAGScheduler: ResultStage 232 (foreachPartition at PredictorEngineApp.java:153) finished in 3.867 s 18/04/17 16:41:04 INFO scheduler.DAGScheduler: Job 233 finished: foreachPartition at PredictorEngineApp.java:153, took 3.995172 s 18/04/17 16:41:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x356ce56f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x356ce56f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36601, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 212.0 (TID 212) in 3979 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:41:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 212.0, whose tasks have all completed, from pool 18/04/17 16:41:04 INFO scheduler.DAGScheduler: ResultStage 212 (foreachPartition at PredictorEngineApp.java:153) finished in 3.979 s 18/04/17 16:41:04 INFO scheduler.DAGScheduler: Job 212 finished: foreachPartition at PredictorEngineApp.java:153, took 4.004394 s 18/04/17 16:41:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7317b4e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7317b4e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60240, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b8d, negotiated timeout = 60000 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92a4, negotiated timeout = 60000 18/04/17 16:41:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b8d 18/04/17 16:41:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b8d closed 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92a4 18/04/17 16:41:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92a4 closed 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.27 from job set of time 1523972460000 ms 18/04/17 16:41:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.23 from job set of time 1523972460000 ms 18/04/17 16:41:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 230.0 (TID 230) in 4322 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:41:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 230.0, whose tasks have all completed, from pool 18/04/17 16:41:04 INFO scheduler.DAGScheduler: ResultStage 230 (foreachPartition at PredictorEngineApp.java:153) finished in 4.322 s 18/04/17 16:41:04 INFO scheduler.DAGScheduler: Job 230 finished: foreachPartition at PredictorEngineApp.java:153, took 4.443609 s 18/04/17 16:41:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x84bbee2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x84bbee20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36607, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b8e, negotiated timeout = 60000 18/04/17 16:41:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b8e 18/04/17 16:41:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b8e closed 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.9 from job set of time 1523972460000 ms 18/04/17 16:41:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 216.0 (TID 216) in 4573 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:41:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 216.0, whose tasks have all completed, from pool 18/04/17 16:41:04 INFO scheduler.DAGScheduler: ResultStage 216 (foreachPartition at PredictorEngineApp.java:153) finished in 4.573 s 18/04/17 16:41:04 INFO scheduler.DAGScheduler: Job 215 finished: foreachPartition at PredictorEngineApp.java:153, took 4.621227 s 18/04/17 16:41:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5110ec3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5110ec30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36610, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b8f, negotiated timeout = 60000 18/04/17 16:41:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b8f 18/04/17 16:41:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b8f closed 18/04/17 16:41:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.19 from job set of time 1523972460000 ms 18/04/17 16:41:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 235.0 (TID 235) in 4841 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:41:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 235.0, whose tasks have all completed, from pool 18/04/17 16:41:05 INFO scheduler.DAGScheduler: ResultStage 235 (foreachPartition at PredictorEngineApp.java:153) finished in 4.844 s 18/04/17 16:41:05 INFO scheduler.DAGScheduler: Job 235 finished: foreachPartition at PredictorEngineApp.java:153, took 4.973964 s 18/04/17 16:41:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6659417e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6659417e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60253, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92a5, negotiated timeout = 60000 18/04/17 16:41:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92a5 18/04/17 16:41:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92a5 closed 18/04/17 16:41:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.31 from job set of time 1523972460000 ms 18/04/17 16:41:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 221.0 (TID 221) in 5770 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:41:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 221.0, whose tasks have all completed, from pool 18/04/17 16:41:05 INFO scheduler.DAGScheduler: ResultStage 221 (foreachPartition at PredictorEngineApp.java:153) finished in 5.770 s 18/04/17 16:41:05 INFO scheduler.DAGScheduler: Job 221 finished: foreachPartition at PredictorEngineApp.java:153, took 5.855690 s 18/04/17 16:41:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6aee58a8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6aee58a80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53875, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9273, negotiated timeout = 60000 18/04/17 16:41:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9273 18/04/17 16:41:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9273 closed 18/04/17 16:41:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.18 from job set of time 1523972460000 ms 18/04/17 16:41:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 217.0 (TID 217) in 6335 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:41:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 217.0, whose tasks have all completed, from pool 18/04/17 16:41:06 INFO scheduler.DAGScheduler: ResultStage 217 (foreachPartition at PredictorEngineApp.java:153) finished in 6.335 s 18/04/17 16:41:06 INFO scheduler.DAGScheduler: Job 217 finished: foreachPartition at PredictorEngineApp.java:153, took 6.388834 s 18/04/17 16:41:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x50743ae8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x50743ae80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53878, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9274, negotiated timeout = 60000 18/04/17 16:41:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9274 18/04/17 16:41:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9274 closed 18/04/17 16:41:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.32 from job set of time 1523972460000 ms 18/04/17 16:41:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 222.0 (TID 222) in 8719 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:41:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 222.0, whose tasks have all completed, from pool 18/04/17 16:41:08 INFO scheduler.DAGScheduler: ResultStage 222 (foreachPartition at PredictorEngineApp.java:153) finished in 8.720 s 18/04/17 16:41:08 INFO scheduler.DAGScheduler: Job 222 finished: foreachPartition at PredictorEngineApp.java:153, took 8.811153 s 18/04/17 16:41:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x15176bde connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x15176bde0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60269, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92a7, negotiated timeout = 60000 18/04/17 16:41:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92a7 18/04/17 16:41:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92a7 closed 18/04/17 16:41:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.11 from job set of time 1523972460000 ms 18/04/17 16:41:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 219.0 (TID 219) in 9206 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:41:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 219.0, whose tasks have all completed, from pool 18/04/17 16:41:09 INFO scheduler.DAGScheduler: ResultStage 219 (foreachPartition at PredictorEngineApp.java:153) finished in 9.207 s 18/04/17 16:41:09 INFO scheduler.DAGScheduler: Job 219 finished: foreachPartition at PredictorEngineApp.java:153, took 9.283306 s 18/04/17 16:41:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x786c712b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x786c712b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53890, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9278, negotiated timeout = 60000 18/04/17 16:41:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9278 18/04/17 16:41:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9278 closed 18/04/17 16:41:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.12 from job set of time 1523972460000 ms 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 217 18/04/17 16:41:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 211.0 (TID 211) in 9734 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:41:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 211.0, whose tasks have all completed, from pool 18/04/17 16:41:09 INFO scheduler.DAGScheduler: ResultStage 211 (foreachPartition at PredictorEngineApp.java:153) finished in 9.735 s 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_235_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO scheduler.DAGScheduler: Job 211 finished: foreachPartition at PredictorEngineApp.java:153, took 9.753707 s 18/04/17 16:41:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4d44e333 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4d44e3330x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_235_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36637, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 236 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_234_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_234_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_212_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b91, negotiated timeout = 60000 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_212_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 213 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_217_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_217_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 218 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_216_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b91 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_216_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 220 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_219_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b91 closed 18/04/17 16:41:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_219_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 223 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_221_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_221_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 222 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_222_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_222_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_224_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.29 from job set of time 1523972460000 ms 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_224_piece0 on ***hostname masked***:41751 in memory (size: 
3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 225 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_226_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_226_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 227 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 229 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_228_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_228_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_230_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_230_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 231 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_232_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:41:09 INFO storage.BlockManagerInfo: Removed broadcast_232_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 233 18/04/17 16:41:09 INFO spark.ContextCleaner: Cleaned accumulator 235 18/04/17 16:41:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 218.0 (TID 218) in 10041 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:41:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 218.0, whose tasks have all completed, from pool 18/04/17 16:41:10 INFO scheduler.DAGScheduler: ResultStage 218 (foreachPartition at PredictorEngineApp.java:153) finished in 10.043 s 18/04/17 16:41:10 INFO scheduler.DAGScheduler: Job 218 finished: foreachPartition at PredictorEngineApp.java:153, took 10.115012 s 18/04/17 16:41:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5b57e8ec connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5b57e8ec0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36641, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b94, negotiated timeout = 60000 18/04/17 16:41:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b94 18/04/17 16:41:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b94 closed 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.15 from job set of time 1523972460000 ms 18/04/17 16:41:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 214.0 (TID 214) in 10189 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:41:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 214.0, whose tasks have all completed, from pool 18/04/17 16:41:10 INFO scheduler.DAGScheduler: ResultStage 214 (foreachPartition at PredictorEngineApp.java:153) finished in 10.191 s 18/04/17 16:41:10 INFO scheduler.DAGScheduler: Job 214 finished: foreachPartition at PredictorEngineApp.java:153, took 10.228589 s 18/04/17 16:41:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f0a5fd6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f0a5fd60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60282, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92a9, negotiated timeout = 60000 18/04/17 16:41:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92a9 18/04/17 16:41:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92a9 closed 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.20 from job set of time 1523972460000 ms 18/04/17 16:41:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 225.0 (TID 225) in 10578 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:41:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 225.0, whose tasks have all completed, from pool 18/04/17 16:41:10 INFO scheduler.DAGScheduler: ResultStage 225 (foreachPartition at PredictorEngineApp.java:153) finished in 10.579 s 18/04/17 16:41:10 INFO scheduler.DAGScheduler: Job 225 finished: foreachPartition at PredictorEngineApp.java:153, took 10.683329 s 18/04/17 16:41:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x250a5a1d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x250a5a1d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36647, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b96, negotiated timeout = 60000 18/04/17 16:41:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b96 18/04/17 16:41:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b96 closed 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.5 from job set of time 1523972460000 ms 18/04/17 16:41:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 231.0 (TID 231) in 10704 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:41:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 231.0, whose tasks have all completed, from pool 18/04/17 16:41:10 INFO scheduler.DAGScheduler: ResultStage 231 (foreachPartition at PredictorEngineApp.java:153) finished in 10.705 s 18/04/17 16:41:10 INFO scheduler.DAGScheduler: Job 231 finished: foreachPartition at PredictorEngineApp.java:153, took 10.830225 s 18/04/17 16:41:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x43cae646 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x43cae6460x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53907, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a927b, negotiated timeout = 60000 18/04/17 16:41:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a927b 18/04/17 16:41:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a927b closed 18/04/17 16:41:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.6 from job set of time 1523972460000 ms 18/04/17 16:41:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 210.0 (TID 210) in 11939 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:41:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 210.0, whose tasks have all completed, from pool 18/04/17 16:41:12 INFO scheduler.DAGScheduler: ResultStage 210 (foreachPartition at PredictorEngineApp.java:153) finished in 11.940 s 18/04/17 16:41:12 INFO scheduler.DAGScheduler: Job 210 finished: foreachPartition at PredictorEngineApp.java:153, took 11.953977 s 18/04/17 16:41:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c91d43c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c91d43c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60293, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92ab, negotiated timeout = 60000 18/04/17 16:41:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92ab 18/04/17 16:41:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92ab closed 18/04/17 16:41:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:12 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.24 from job set of time 1523972460000 ms 18/04/17 16:41:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 213.0 (TID 213) in 13055 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:41:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 213.0, whose tasks have all completed, from pool 18/04/17 16:41:13 INFO scheduler.DAGScheduler: ResultStage 213 (foreachPartition at PredictorEngineApp.java:153) finished in 13.055 s 18/04/17 16:41:13 INFO scheduler.DAGScheduler: Job 213 finished: foreachPartition at PredictorEngineApp.java:153, took 13.088725 s 18/04/17 16:41:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f5c078d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f5c078d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36660, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28b9a, negotiated timeout = 60000 18/04/17 16:41:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28b9a 18/04/17 16:41:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28b9a closed 18/04/17 16:41:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.28 from job set of time 1523972460000 ms 18/04/17 16:41:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 233.0 (TID 233) in 13475 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:41:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 233.0, whose tasks have all completed, from pool 18/04/17 16:41:13 INFO scheduler.DAGScheduler: ResultStage 233 (foreachPartition at PredictorEngineApp.java:153) finished in 13.477 s 18/04/17 16:41:13 INFO scheduler.DAGScheduler: Job 234 finished: foreachPartition at PredictorEngineApp.java:153, took 13.600990 s 18/04/17 16:41:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3ff3a2f0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3ff3a2f00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53919, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a927d, negotiated timeout = 60000 18/04/17 16:41:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a927d 18/04/17 16:41:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a927d closed 18/04/17 16:41:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.2 from job set of time 1523972460000 ms 18/04/17 16:41:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 229.0 (TID 229) in 14113 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:41:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 229.0, whose tasks have all completed, from pool 18/04/17 16:41:14 INFO scheduler.DAGScheduler: ResultStage 229 (foreachPartition at PredictorEngineApp.java:153) finished in 14.114 s 18/04/17 16:41:14 INFO scheduler.DAGScheduler: Job 229 finished: foreachPartition at PredictorEngineApp.java:153, took 14.232834 s 18/04/17 16:41:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xbb87503 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xbb875030x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53924, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a927e, negotiated timeout = 60000 18/04/17 16:41:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a927e 18/04/17 16:41:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a927e closed 18/04/17 16:41:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.33 from job set of time 1523972460000 ms 18/04/17 16:41:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 215.0 (TID 215) in 14238 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:41:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 215.0, whose tasks have all completed, from pool 18/04/17 16:41:14 INFO scheduler.DAGScheduler: ResultStage 215 (foreachPartition at PredictorEngineApp.java:153) finished in 14.239 s 18/04/17 16:41:14 INFO scheduler.DAGScheduler: Job 216 finished: foreachPartition at PredictorEngineApp.java:153, took 14.281933 s 18/04/17 16:41:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5cd49cb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5cd49cb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53927, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9280, negotiated timeout = 60000 18/04/17 16:41:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9280 18/04/17 16:41:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9280 closed 18/04/17 16:41:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.34 from job set of time 1523972460000 ms 18/04/17 16:41:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 227.0 (TID 227) in 14968 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:41:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 227.0, whose tasks have all completed, from pool 18/04/17 16:41:15 INFO scheduler.DAGScheduler: ResultStage 227 (foreachPartition at PredictorEngineApp.java:153) finished in 14.969 s 18/04/17 16:41:15 INFO scheduler.DAGScheduler: Job 227 finished: foreachPartition at PredictorEngineApp.java:153, took 15.080781 s 18/04/17 16:41:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3eaca233 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3eaca2330x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53931, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9282, negotiated timeout = 60000 18/04/17 16:41:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9282 18/04/17 16:41:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9282 closed 18/04/17 16:41:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:15 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.1 from job set of time 1523972460000 ms 18/04/17 16:41:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 220.0 (TID 220) in 21679 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:41:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 220.0, whose tasks have all completed, from pool 18/04/17 16:41:21 INFO scheduler.DAGScheduler: ResultStage 220 (foreachPartition at PredictorEngineApp.java:153) finished in 21.681 s 18/04/17 16:41:21 INFO scheduler.DAGScheduler: Job 220 finished: foreachPartition at PredictorEngineApp.java:153, took 21.760784 s 18/04/17 16:41:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x46299d37 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x46299d370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36688, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ba0, negotiated timeout = 60000 18/04/17 16:41:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ba0 18/04/17 16:41:21 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ba0 closed 18/04/17 16:41:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:21 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.22 from job set of time 1523972460000 ms 18/04/17 16:41:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 223.0 (TID 223) in 21926 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:41:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 223.0, whose tasks have all completed, from pool 18/04/17 16:41:22 INFO scheduler.DAGScheduler: ResultStage 223 (foreachPartition at PredictorEngineApp.java:153) finished in 21.927 s 18/04/17 16:41:22 INFO scheduler.DAGScheduler: Job 223 finished: foreachPartition at PredictorEngineApp.java:153, took 22.022273 s 18/04/17 16:41:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x472bcabe connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x472bcabe0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53948, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9284, negotiated timeout = 60000 18/04/17 16:41:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9284 18/04/17 16:41:22 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9284 closed 18/04/17 16:41:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:22 INFO scheduler.JobScheduler: Finished job streaming job 1523972460000 ms.10 from job set of time 1523972460000 ms 18/04/17 16:41:22 INFO scheduler.JobScheduler: Total delay: 22.134 s for time 1523972460000 ms (execution: 22.061 s) 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 252 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 252 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 216 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 216 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 252 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 252 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 216 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 216 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 253 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 253 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 217 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 217 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 253 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 253 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 217 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 217 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 254 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 254 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 218 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 218 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 254 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 254 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 218 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 218 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 255 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 255 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 219 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 219 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 255 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 255 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 219 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 219 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 256 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 256 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 220 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 220 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 256 from persistence list 18/04/17 
16:41:22 INFO storage.BlockManager: Removing RDD 256 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 220 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 220 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 257 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 257 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 221 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 221 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 257 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 257 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 221 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 221 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 258 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 258 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 222 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 222 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 258 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 258 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 222 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 222 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 259 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 259 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 223 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 223 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 259 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 259 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 223 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 223 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 260 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 260 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 224 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 224 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 260 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 260 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 224 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 224 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 261 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 261 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 225 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 225 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 261 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 261 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 225 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 225 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 262 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 262 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 226 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 226 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 262 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 262 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 226 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 226 
18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 263 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 263 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 227 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 227 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 263 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 263 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 227 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 227 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 264 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 264 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 228 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 228 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 264 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 264 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 228 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 228 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 265 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 265 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 229 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 229 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 265 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 265 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 229 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 229 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 266 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 266 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 230 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 230 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 266 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 266 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 230 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 230 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 267 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 267 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 231 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 231 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 267 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 267 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 231 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 231 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 268 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 268 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 232 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 232 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 268 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 268 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 232 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 232 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 269 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 269 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 
233 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 233 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 269 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 269 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 233 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 233 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 270 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 270 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 234 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 234 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 270 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 270 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 234 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 234 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 271 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 271 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 235 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 235 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 271 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 271 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 235 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 235 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 272 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 272 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 236 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 236 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 272 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 272 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 236 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 236 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 273 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 273 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 237 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 237 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 273 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 273 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 237 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 237 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 274 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 274 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 238 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 238 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 274 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 274 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 238 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 238 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 275 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 275 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 239 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 239 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 275 from persistence list 18/04/17 16:41:22 INFO 
storage.BlockManager: Removing RDD 275 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 239 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 239 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 276 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 276 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 240 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 240 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 276 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 276 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 240 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 240 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 277 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 277 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 241 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 241 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 277 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 277 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 241 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 241 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 278 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 278 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 242 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 242 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 278 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 278 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 242 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 242 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 279 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 279 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 243 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 243 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 279 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 279 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 243 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 243 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 280 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 280 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 244 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 244 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 280 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 280 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 244 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 244 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 281 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 281 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 245 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 245 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 281 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 281 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 245 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 245 18/04/17 
16:41:22 INFO kafka.KafkaRDD: Removing RDD 282 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 282 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 246 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 246 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 282 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 282 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 246 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 246 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 283 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 283 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 247 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 247 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 283 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 283 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 247 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 247 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 284 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 284 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 248 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 248 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 284 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 284 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 248 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 248 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 285 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 285 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 249 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 249 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 285 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 285 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 249 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 249 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 286 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 286 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 250 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 250 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 286 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 286 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 250 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 250 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 287 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 287 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 251 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 251 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 287 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 287 18/04/17 16:41:22 INFO kafka.KafkaRDD: Removing RDD 251 from persistence list 18/04/17 16:41:22 INFO storage.BlockManager: Removing RDD 251 18/04/17 16:41:22 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:41:22 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972280000 ms 1523972340000 ms 
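The pattern above repeats for every 60-second batch: a Kafka direct stream created at createDirectStream (PredictorEngineApp.java:125) yields one KafkaRDD per input stream per batch (the thirty-odd numbered jobs in each job set), each driven by a foreachPartition output action (PredictorEngineApp.java:153), with a short-lived HBase connection (the hconnection-0x... ZooKeeper sessions that open and close around each finished job) used along the way; once the batch completes, the old KafkaRDDs are unpersisted, as in the "Removing RDD ... from persistence list" run above. The application source is not part of this log, so the following is only a minimal sketch of that shape, assuming Spark 1.6 with the Kafka 0.8 direct-stream API and the HBase 1.x client; the broker list, topic, table, and column names are invented, and whether the HBase connection is opened on the driver or per partition on the executors cannot be told from this log alone.

```java
// Hypothetical reconstruction -- NOT the actual PredictorEngineApp source.
// Assumes Spark 1.6 (spark-streaming-kafka for Kafka 0.8) and the HBase 1.x client;
// brokers, topic, table and column names below are placeholders.
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public class PredictorEngineSketch {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
    // 60 s batches, matching the 1523972460000 / 1523972520000 ms batch times in the log.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers
    Set<String> topics = new HashSet<>(Arrays.asList("events"));          // placeholder topic

    // Direct (receiver-less) stream: one KafkaRDD per batch, as logged at
    // "createDirectStream at PredictorEngineApp.java:125".
    JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, topics);

    // One output action per batch, as logged at
    // "foreachPartition at PredictorEngineApp.java:153".
    stream.foreachRDD(rdd ->
        rdd.foreachPartition(records -> {
          // A fresh HBase connection per partition -- the same open/close-per-job
          // churn that shows up in the log as short-lived hconnection-0x...
          // ZooKeeper sessions (sketched here on the executor side; the log alone
          // does not show where the connection is actually created).
          Configuration hbaseConf = HBaseConfiguration.create();
          try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
               Table table = connection.getTable(TableName.valueOf("predictions"))) {
            while (records.hasNext()) {
              Tuple2<String, String> record = records.next();
              Put put = new Put(Bytes.toBytes(record._1()));
              put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("payload"),
                  Bytes.toBytes(record._2()));
              table.put(put);
            }
          }
        }));

    jssc.start();
    jssc.awaitTermination();
  }
}
```

Opening and closing a Connection this often is expensive: each one negotiates a fresh ZooKeeper session (hence the new sessionid every few seconds above), so a pooled or lazily initialized per-executor connection is the usual way to cut that churn.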
18/04/17 16:41:28 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 206.0 (TID 206) in 87893 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:41:28 INFO cluster.YarnClusterScheduler: Removed TaskSet 206.0, whose tasks have all completed, from pool 18/04/17 16:41:28 INFO scheduler.DAGScheduler: ResultStage 206 (foreachPartition at PredictorEngineApp.java:153) finished in 87.893 s 18/04/17 16:41:28 INFO scheduler.DAGScheduler: Job 206 finished: foreachPartition at PredictorEngineApp.java:153, took 87.998730 s 18/04/17 16:41:28 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6563de69 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:41:28 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6563de690x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:41:28 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:41:28 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:53967, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:41:28 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9288, negotiated timeout = 60000 18/04/17 16:41:28 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9288 18/04/17 16:41:28 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9288 closed 18/04/17 16:41:28 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:41:28 INFO scheduler.JobScheduler: Finished job streaming job 1523972400000 ms.18 from job set of time 1523972400000 ms 18/04/17 16:41:28 INFO scheduler.JobScheduler: Total delay: 88.117 s for time 1523972400000 ms (execution: 88.046 s) 18/04/17 16:41:28 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:41:28 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 16:42:00 INFO scheduler.JobScheduler: Added jobs for time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.0 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.1 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.2 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.3 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.0 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.4 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.3 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.6 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.7 from job set of time 
1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.5 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.8 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.9 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.4 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.10 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.12 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.11 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.13 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.14 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.15 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.13 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.16 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.18 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.17 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.14 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.19 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.20 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.16 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.22 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.21 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.23 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.17 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.24 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.25 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.21 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.27 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.26 from job set of 
time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.28 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.29 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.30 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.31 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.30 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.32 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.33 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.35 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972520000 ms.34 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.35 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 236 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 236 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 236 (KafkaRDD[344] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_236 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO 
spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_236_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_236_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 236 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 236 (KafkaRDD[344] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 236.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 237 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 237 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 237 (KafkaRDD[325] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_237 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 236.0 (TID 236, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_237_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_237_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 237 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 237 (KafkaRDD[325] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 237.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 238 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 238 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 238 (KafkaRDD[343] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 237.0 (TID 237, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_238 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_238_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_238_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 238 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 238 (KafkaRDD[343] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 238.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 239 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 239 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 239 (KafkaRDD[333] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 238.0 (TID 238, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_239 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_239_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_239_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 239 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 239 (KafkaRDD[333] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 239.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 240 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 240 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 240 (KafkaRDD[346] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_236_piece0 in memory on ***hostname 
masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 239.0 (TID 239, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_240 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_240_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_237_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_240_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 240 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 240 (KafkaRDD[346] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 240.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 241 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 241 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 241 (KafkaRDD[355] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 240.0 (TID 240, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_241 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_241_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_206_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_241_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 241 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 241 (KafkaRDD[355] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 241.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 242 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 242 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 242 (KafkaRDD[326] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_238_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 241.0 (TID 241, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_242 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_242_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_242_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 242 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 242 (KafkaRDD[326] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 242.0 with 1 tasks 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_239_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 243 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 243 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_206_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 243 (KafkaRDD[348] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_243 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 242.0 (TID 242, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 211 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 207 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_210_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_243_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_243_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 243 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 243 (KafkaRDD[348] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 243.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 244 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 244 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 244 (KafkaRDD[329] at createDirectStream at PredictorEngineApp.java:125), 
which has no missing parents 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_210_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_244 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 243.0 (TID 243, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 214 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_240_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_211_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_211_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_241_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_244_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_244_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 244 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 244 (KafkaRDD[329] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 244.0 with 1 tasks 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 212 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 245 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 245 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 245 (KafkaRDD[349] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_245 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 244.0 (TID 244, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_214_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_214_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_242_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 215 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_245_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_245_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed 
broadcast_213_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 245 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 245 (KafkaRDD[349] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 245.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 246 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 246 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 246 (KafkaRDD[351] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_213_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_246 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 245.0 (TID 245, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 219 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_215_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_243_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_215_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_246_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 216 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_246_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 246 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 246 (KafkaRDD[351] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 246.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 247 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 247 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 247 (KafkaRDD[352] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_220_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_247 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 
16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 246.0 (TID 246, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_220_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_245_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 221 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_218_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_247_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_247_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 247 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 247 (KafkaRDD[352] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 247.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 248 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 248 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_218_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 248 (KafkaRDD[330] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_248 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 226 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 247.0 (TID 247, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_223_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_244_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_223_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_248_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_248_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 248 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 248 (KafkaRDD[330] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 248.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 250 (foreachPartition at 
PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 249 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 249 (KafkaRDD[332] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_249 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 248.0 (TID 248, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 224 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_227_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_249_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_249_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 249 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 249 (KafkaRDD[332] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 249.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 249 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 250 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 250 (KafkaRDD[356] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_250 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 249.0 (TID 249, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_246_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_227_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 228 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_250_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_250_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_225_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 250 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 250 (KafkaRDD[356] at createDirectStream at 
PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 250.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 251 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 251 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 251 (KafkaRDD[347] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_251 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_247_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 250.0 (TID 250, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_248_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_225_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_251_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_251_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 251 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 251 (KafkaRDD[347] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 251.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 252 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 252 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 252 (KafkaRDD[342] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_252 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 251.0 (TID 251, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 232 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_229_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_252_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_252_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_249_piece0 in memory on ***hostname masked***:56034 
(size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 252 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_250_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 252 (KafkaRDD[342] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 252.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 253 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 253 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_229_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 253 (KafkaRDD[358] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_253 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 252.0 (TID 252, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 230 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_233_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_253_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_253_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 253 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_233_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 253 (KafkaRDD[358] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 253.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 254 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 254 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 254 (KafkaRDD[353] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_254 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 253.0 (TID 253, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_252_piece0 in memory on ***hostname masked***:56034 
(size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_254_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_254_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.ContextCleaner: Cleaned accumulator 234 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 254 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 254 (KafkaRDD[353] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 254.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 255 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 255 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 255 (KafkaRDD[339] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_231_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 254.0 (TID 254, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_255 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Removed broadcast_231_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_251_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_255_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_255_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 255 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 255 (KafkaRDD[339] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 255.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 257 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 256 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 256 (KafkaRDD[357] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_254_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 255.0 (TID 255, ***hostname masked***, executor 8, partition 0, 
RACK_LOCAL, 2064 bytes) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_256 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_256_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_256_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 256 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 256 (KafkaRDD[357] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 256.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 256 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 257 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 257 (KafkaRDD[336] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 256.0 (TID 256, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_253_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_257 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_255_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_257_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_257_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 257 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 257 (KafkaRDD[336] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 257.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 258 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 258 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 258 (KafkaRDD[335] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_258 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 257.0 (TID 257, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_258_piece0 stored as 
bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_258_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 258 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 258 (KafkaRDD[335] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 258.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 259 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 259 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 259 (KafkaRDD[350] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_259 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 258.0 (TID 258, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_259_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_259_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 259 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 259 (KafkaRDD[350] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 259.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 260 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 260 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 260 (KafkaRDD[334] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_260 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 259.0 (TID 259, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_257_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_260_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_260_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 260 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from 
ResultStage 260 (KafkaRDD[334] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 260.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Got job 261 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 261 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting ResultStage 261 (KafkaRDD[331] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_261 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 260.0 (TID 260, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:42:00 INFO storage.MemoryStore: Block broadcast_261_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_261_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:00 INFO spark.SparkContext: Created broadcast 261 from broadcast at DAGScheduler.scala:1006 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 261 (KafkaRDD[331] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Adding task set 261.0 with 1 tasks 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 261.0 (TID 261, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_258_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_261_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_260_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_259_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO storage.BlockManagerInfo: Added broadcast_256_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 236.0 (TID 236) in 183 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 236.0, whose tasks have all completed, from pool 18/04/17 16:42:00 INFO scheduler.DAGScheduler: ResultStage 236 (foreachPartition at PredictorEngineApp.java:153) finished in 0.184 s 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Job 236 finished: foreachPartition at PredictorEngineApp.java:153, took 0.197513 s 18/04/17 16:42:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x75f8c7df connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x75f8c7df0x0, 
quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60494, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92b8, negotiated timeout = 60000 18/04/17 16:42:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92b8 18/04/17 16:42:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92b8 closed 18/04/17 16:42:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.20 from job set of time 1523972520000 ms 18/04/17 16:42:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 252.0 (TID 252) in 157 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:42:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 252.0, whose tasks have all completed, from pool 18/04/17 16:42:00 INFO scheduler.DAGScheduler: ResultStage 252 (foreachPartition at PredictorEngineApp.java:153) finished in 0.158 s 18/04/17 16:42:00 INFO scheduler.DAGScheduler: Job 252 finished: foreachPartition at PredictorEngineApp.java:153, took 0.236369 s 18/04/17 16:42:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x17696424 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x176964240x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60497, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92ba, negotiated timeout = 60000 18/04/17 16:42:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92ba 18/04/17 16:42:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92ba closed 18/04/17 16:42:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.18 from job set of time 1523972520000 ms 18/04/17 16:42:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 245.0 (TID 245) in 2468 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:42:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 245.0, whose tasks have all completed, from pool 18/04/17 16:42:02 INFO scheduler.DAGScheduler: ResultStage 245 (foreachPartition at PredictorEngineApp.java:153) finished in 2.469 s 18/04/17 16:42:02 INFO scheduler.DAGScheduler: Job 245 finished: foreachPartition at PredictorEngineApp.java:153, took 2.526165 s 18/04/17 16:42:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x21d08b83 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x21d08b830x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36865, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bb2, negotiated timeout = 60000 18/04/17 16:42:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bb2 18/04/17 16:42:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bb2 closed 18/04/17 16:42:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.25 from job set of time 1523972520000 ms 18/04/17 16:42:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 249.0 (TID 249) in 3364 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:42:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 249.0, whose tasks have all completed, from pool 18/04/17 16:42:03 INFO scheduler.DAGScheduler: ResultStage 249 (foreachPartition at PredictorEngineApp.java:153) finished in 3.365 s 18/04/17 16:42:03 INFO scheduler.DAGScheduler: Job 250 finished: foreachPartition at PredictorEngineApp.java:153, took 3.437884 s 18/04/17 16:42:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x55e784c2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x55e784c20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60507, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92bf, negotiated timeout = 60000 18/04/17 16:42:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92bf 18/04/17 16:42:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92bf closed 18/04/17 16:42:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.8 from job set of time 1523972520000 ms 18/04/17 16:42:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 247.0 (TID 247) in 3782 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:42:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 247.0, whose tasks have all completed, from pool 18/04/17 16:42:03 INFO scheduler.DAGScheduler: ResultStage 247 (foreachPartition at PredictorEngineApp.java:153) finished in 3.783 s 18/04/17 16:42:03 INFO scheduler.DAGScheduler: Job 247 finished: foreachPartition at PredictorEngineApp.java:153, took 3.848281 s 18/04/17 16:42:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2ba88022 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2ba880220x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60510, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92c2, negotiated timeout = 60000 18/04/17 16:42:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92c2 18/04/17 16:42:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92c2 closed 18/04/17 16:42:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.28 from job set of time 1523972520000 ms 18/04/17 16:42:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 261.0 (TID 261) in 4157 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:42:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 261.0, whose tasks have all completed, from pool 18/04/17 16:42:04 INFO scheduler.DAGScheduler: ResultStage 261 (foreachPartition at PredictorEngineApp.java:153) finished in 4.159 s 18/04/17 16:42:04 INFO scheduler.DAGScheduler: Job 261 finished: foreachPartition at PredictorEngineApp.java:153, took 4.268878 s 18/04/17 16:42:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x217da346 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x217da3460x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54134, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9298, negotiated timeout = 60000 18/04/17 16:42:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9298 18/04/17 16:42:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9298 closed 18/04/17 16:42:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.7 from job set of time 1523972520000 ms 18/04/17 16:42:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 253.0 (TID 253) in 4719 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:42:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 253.0, whose tasks have all completed, from pool 18/04/17 16:42:04 INFO scheduler.DAGScheduler: ResultStage 253 (foreachPartition at PredictorEngineApp.java:153) finished in 4.720 s 18/04/17 16:42:04 INFO scheduler.DAGScheduler: Job 253 finished: foreachPartition at PredictorEngineApp.java:153, took 4.802864 s 18/04/17 16:42:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x8a6653a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x8a6653a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36886, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bb4, negotiated timeout = 60000 18/04/17 16:42:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bb4 18/04/17 16:42:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bb4 closed 18/04/17 16:42:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.34 from job set of time 1523972520000 ms 18/04/17 16:42:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 241.0 (TID 241) in 5312 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:42:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 241.0, whose tasks have all completed, from pool 18/04/17 16:42:05 INFO scheduler.DAGScheduler: ResultStage 241 (foreachPartition at PredictorEngineApp.java:153) finished in 5.313 s 18/04/17 16:42:05 INFO scheduler.DAGScheduler: Job 241 finished: foreachPartition at PredictorEngineApp.java:153, took 5.354966 s 18/04/17 16:42:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1617d2ec connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1617d2ec0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60528, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92c4, negotiated timeout = 60000 18/04/17 16:42:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92c4 18/04/17 16:42:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92c4 closed 18/04/17 16:42:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.31 from job set of time 1523972520000 ms 18/04/17 16:42:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 238.0 (TID 238) in 6478 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:42:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 238.0, whose tasks have all completed, from pool 18/04/17 16:42:06 INFO scheduler.DAGScheduler: ResultStage 238 (foreachPartition at PredictorEngineApp.java:153) finished in 6.478 s 18/04/17 16:42:06 INFO scheduler.DAGScheduler: Job 238 finished: foreachPartition at PredictorEngineApp.java:153, took 6.498818 s 18/04/17 16:42:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c5ab1ec connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c5ab1ec0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36894, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bb6, negotiated timeout = 60000 18/04/17 16:42:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bb6 18/04/17 16:42:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bb6 closed 18/04/17 16:42:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.19 from job set of time 1523972520000 ms 18/04/17 16:42:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 254.0 (TID 254) in 6869 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:42:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 254.0, whose tasks have all completed, from pool 18/04/17 16:42:07 INFO scheduler.DAGScheduler: ResultStage 254 (foreachPartition at PredictorEngineApp.java:153) finished in 6.870 s 18/04/17 16:42:07 INFO scheduler.DAGScheduler: Job 254 finished: foreachPartition at PredictorEngineApp.java:153, took 6.956289 s 18/04/17 16:42:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7a67aa90 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7a67aa900x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36898, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bb8, negotiated timeout = 60000 18/04/17 16:42:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bb8 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bb8 closed 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.29 from job set of time 1523972520000 ms 18/04/17 16:42:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 242.0 (TID 242) in 7393 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:42:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 242.0, whose tasks have all completed, from pool 18/04/17 16:42:07 INFO scheduler.DAGScheduler: ResultStage 242 (foreachPartition at PredictorEngineApp.java:153) finished in 7.393 s 18/04/17 16:42:07 INFO scheduler.DAGScheduler: Job 242 finished: foreachPartition at PredictorEngineApp.java:153, took 7.439449 s 18/04/17 16:42:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x322a639f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x322a639f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54158, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9299, negotiated timeout = 60000 18/04/17 16:42:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9299 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9299 closed 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.2 from job set of time 1523972520000 ms 18/04/17 16:42:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 250.0 (TID 250) in 7425 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:42:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 250.0, whose tasks have all completed, from pool 18/04/17 16:42:07 INFO scheduler.DAGScheduler: ResultStage 250 (foreachPartition at PredictorEngineApp.java:153) finished in 7.426 s 18/04/17 16:42:07 INFO scheduler.DAGScheduler: Job 249 finished: foreachPartition at PredictorEngineApp.java:153, took 7.502923 s 18/04/17 16:42:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x75d3e0e6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x75d3e0e60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36905, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bba, negotiated timeout = 60000 18/04/17 16:42:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bba 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bba closed 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.32 from job set of time 1523972520000 ms 18/04/17 16:42:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 239.0 (TID 239) in 7554 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:42:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 239.0, whose tasks have all completed, from pool 18/04/17 16:42:07 INFO scheduler.DAGScheduler: ResultStage 239 (foreachPartition at PredictorEngineApp.java:153) finished in 7.554 s 18/04/17 16:42:07 INFO scheduler.DAGScheduler: Job 239 finished: foreachPartition at PredictorEngineApp.java:153, took 7.578523 s 18/04/17 16:42:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b075227 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7b0752270x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54164, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a929b, negotiated timeout = 60000 18/04/17 16:42:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a929b 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a929b closed 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.9 from job set of time 1523972520000 ms 18/04/17 16:42:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 243.0 (TID 243) in 7590 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:42:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 243.0, whose tasks have all completed, from pool 18/04/17 16:42:07 INFO scheduler.DAGScheduler: ResultStage 243 (foreachPartition at PredictorEngineApp.java:153) finished in 7.591 s 18/04/17 16:42:07 INFO scheduler.DAGScheduler: Job 243 finished: foreachPartition at PredictorEngineApp.java:153, took 7.639921 s 18/04/17 16:42:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xd6161bb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xd6161bb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36911, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bbb, negotiated timeout = 60000 18/04/17 16:42:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bbb 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bbb closed 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.24 from job set of time 1523972520000 ms 18/04/17 16:42:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 259.0 (TID 259) in 7735 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:42:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 259.0, whose tasks have all completed, from pool 18/04/17 16:42:07 INFO scheduler.DAGScheduler: ResultStage 259 (foreachPartition at PredictorEngineApp.java:153) finished in 7.736 s 18/04/17 16:42:07 INFO scheduler.DAGScheduler: Job 259 finished: foreachPartition at PredictorEngineApp.java:153, took 7.840739 s 18/04/17 16:42:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2cd4b808 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2cd4b8080x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36914, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bbc, negotiated timeout = 60000 18/04/17 16:42:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bbc 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bbc closed 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.26 from job set of time 1523972520000 ms 18/04/17 16:42:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 256.0 (TID 256) in 7777 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:42:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 256.0, whose tasks have all completed, from pool 18/04/17 16:42:07 INFO scheduler.DAGScheduler: ResultStage 256 (foreachPartition at PredictorEngineApp.java:153) finished in 7.778 s 18/04/17 16:42:07 INFO scheduler.DAGScheduler: Job 257 finished: foreachPartition at PredictorEngineApp.java:153, took 7.872791 s 18/04/17 16:42:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7772c068 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7772c0680x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54173, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a929c, negotiated timeout = 60000 18/04/17 16:42:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a929c 18/04/17 16:42:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a929c closed 18/04/17 16:42:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.33 from job set of time 1523972520000 ms 18/04/17 16:42:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 257.0 (TID 257) in 8862 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:42:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 257.0, whose tasks have all completed, from pool 18/04/17 16:42:09 INFO scheduler.DAGScheduler: ResultStage 257 (foreachPartition at PredictorEngineApp.java:153) finished in 8.863 s 18/04/17 16:42:09 INFO scheduler.DAGScheduler: Job 256 finished: foreachPartition at PredictorEngineApp.java:153, took 8.961551 s 18/04/17 16:42:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x226ed458 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x226ed4580x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60560, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92c6, negotiated timeout = 60000 18/04/17 16:42:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92c6 18/04/17 16:42:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92c6 closed 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.12 from job set of time 1523972520000 ms 18/04/17 16:42:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 248.0 (TID 248) in 9416 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:42:09 INFO scheduler.DAGScheduler: ResultStage 248 (foreachPartition at PredictorEngineApp.java:153) finished in 9.418 s 18/04/17 16:42:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 248.0, whose tasks have all completed, from pool 18/04/17 16:42:09 INFO scheduler.DAGScheduler: Job 248 finished: foreachPartition at PredictorEngineApp.java:153, took 9.712343 s 18/04/17 16:42:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6e2106c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6e2106c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36934, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bbe, negotiated timeout = 60000 18/04/17 16:42:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bbe 18/04/17 16:42:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bbe closed 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.6 from job set of time 1523972520000 ms 18/04/17 16:42:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 255.0 (TID 255) in 9762 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:42:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 255.0, whose tasks have all completed, from pool 18/04/17 16:42:09 INFO scheduler.DAGScheduler: ResultStage 255 (foreachPartition at PredictorEngineApp.java:153) finished in 9.764 s 18/04/17 16:42:09 INFO scheduler.DAGScheduler: Job 255 finished: foreachPartition at PredictorEngineApp.java:153, took 9.854678 s 18/04/17 16:42:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2becd080 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2becd0800x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60575, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92ca, negotiated timeout = 60000 18/04/17 16:42:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92ca 18/04/17 16:42:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92ca closed 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.15 from job set of time 1523972520000 ms 18/04/17 16:42:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 240.0 (TID 240) in 9879 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:42:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 240.0, whose tasks have all completed, from pool 18/04/17 16:42:09 INFO scheduler.DAGScheduler: ResultStage 240 (foreachPartition at PredictorEngineApp.java:153) finished in 9.879 s 18/04/17 16:42:09 INFO scheduler.DAGScheduler: Job 240 finished: foreachPartition at PredictorEngineApp.java:153, took 9.907566 s 18/04/17 16:42:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2b4292d8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2b4292d80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:09 INFO storage.BlockManagerInfo: Removed broadcast_247_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
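Each of the jobs above is one foreachPartition output operation (PredictorEngineApp.java:153) over a Kafka direct stream (createDirectStream at PredictorEngineApp.java:125), and each is followed on the driver by a short-lived HBase client connection (hconnection-0x…) whose ZooKeeper session is opened and closed within the same second. That pattern is consistent with a new HBase Connection being created per batch rather than reused; whether it is created driver-side in the foreachRDD body, executor-side in the partition body, or both cannot be determined from this driver log alone. The application source is not part of this log, so the following Java sketch is only a reconstruction under that assumption, showing the common executor-side variant; the broker list, topic name, table name and column family are placeholders, and the Spark 1.6 Kafka direct API and HBase 1.x client API are assumed from the versions visible in the log.

    import java.util.*;

    import kafka.serializer.StringDecoder;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    public class PredictorEngineSketch {
        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
            // 60 s batches, matching the one-minute batch times (…520000, …580000 ms) in this log.
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

            Map<String, String> kafkaParams = new HashMap<>();
            kafkaParams.put("metadata.broker.list", "broker1:9092"); // placeholder broker list
            Set<String> topics = Collections.singleton("events");    // placeholder topic

            // Direct Kafka stream, as logged at PredictorEngineApp.java:125 (createDirectStream).
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, topics);

            // One output operation per batch; the partition body opens and closes an HBase
            // connection, the kind of per-batch churn that shows up as hconnection-0x… open/close pairs.
            stream.foreachRDD((JavaPairRDD<String, String> rdd) ->
                rdd.foreachPartition(records -> {
                    Configuration hbaseConf = HBaseConfiguration.create();
                    try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
                         Table table = connection.getTable(TableName.valueOf("predictions"))) { // placeholder table
                        while (records.hasNext()) {
                            Tuple2<String, String> record = records.next();
                            Put put = new Put(Bytes.toBytes(record._1()));
                            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"), // placeholder family/qualifier
                                          Bytes.toBytes(record._2()));
                            table.put(put);
                        }
                    }
                })
            );

            jssc.start();
            jssc.awaitTermination();
        }
    }

Reusing one connection per executor (or per JVM) instead of creating one per batch would remove most of this per-job ZooKeeper session churn; that is a design observation only, and the log does not confirm which variant the application actually uses.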
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60579, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:09 INFO storage.BlockManagerInfo: Removed broadcast_247_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:09 INFO spark.ContextCleaner: Cleaned accumulator 237 18/04/17 16:42:09 INFO storage.BlockManagerInfo: Removed broadcast_236_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92cb, negotiated timeout = 60000 18/04/17 16:42:09 INFO storage.BlockManagerInfo: Removed broadcast_236_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:09 INFO spark.ContextCleaner: Cleaned accumulator 240 18/04/17 16:42:09 INFO storage.BlockManagerInfo: Removed broadcast_238_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:09 INFO storage.BlockManagerInfo: Removed broadcast_238_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:09 INFO spark.ContextCleaner: Cleaned accumulator 239 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_239_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92cb 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_239_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 243 18/04/17 16:42:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92cb closed 18/04/17 16:42:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_241_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_241_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 242 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_243_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_243_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 244 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_242_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_242_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 246 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_245_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.22 from job set of time 1523972520000 ms 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_245_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: 
Removed broadcast_261_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_261_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 262 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 249 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 248 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_249_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_249_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 250 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_248_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_248_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_250_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_250_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 251 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_252_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_252_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 253 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 255 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_253_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_253_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 254 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_255_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_255_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 256 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_254_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_254_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 258 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_256_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_256_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 257 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_257_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed 
broadcast_257_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_259_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:42:10 INFO storage.BlockManagerInfo: Removed broadcast_259_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:42:10 INFO spark.ContextCleaner: Cleaned accumulator 260 18/04/17 16:42:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 251.0 (TID 251) in 10100 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:42:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 251.0, whose tasks have all completed, from pool 18/04/17 16:42:10 INFO scheduler.DAGScheduler: ResultStage 251 (foreachPartition at PredictorEngineApp.java:153) finished in 10.101 s 18/04/17 16:42:10 INFO scheduler.DAGScheduler: Job 251 finished: foreachPartition at PredictorEngineApp.java:153, took 10.175750 s 18/04/17 16:42:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4cccc663 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4cccc6630x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36944, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bbf, negotiated timeout = 60000 18/04/17 16:42:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bbf 18/04/17 16:42:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bbf closed 18/04/17 16:42:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.23 from job set of time 1523972520000 ms 18/04/17 16:42:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 246.0 (TID 246) in 10528 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:42:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 246.0, whose tasks have all completed, from pool 18/04/17 16:42:10 INFO scheduler.DAGScheduler: ResultStage 246 (foreachPartition at PredictorEngineApp.java:153) finished in 10.530 s 18/04/17 16:42:10 INFO scheduler.DAGScheduler: Job 246 finished: foreachPartition at PredictorEngineApp.java:153, took 10.591066 s 18/04/17 16:42:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xc53f409 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xc53f4090x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 
16:42:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54203, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a929f, negotiated timeout = 60000 18/04/17 16:42:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a929f 18/04/17 16:42:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a929f closed 18/04/17 16:42:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.27 from job set of time 1523972520000 ms 18/04/17 16:42:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 244.0 (TID 244) in 10911 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:42:11 INFO scheduler.DAGScheduler: ResultStage 244 (foreachPartition at PredictorEngineApp.java:153) finished in 10.912 s 18/04/17 16:42:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 244.0, whose tasks have all completed, from pool 18/04/17 16:42:11 INFO scheduler.DAGScheduler: Job 244 finished: foreachPartition at PredictorEngineApp.java:153, took 10.965538 s 18/04/17 16:42:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x50411915 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x504119150x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36951, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bc1, negotiated timeout = 60000 18/04/17 16:42:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bc1 18/04/17 16:42:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bc1 closed 18/04/17 16:42:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.5 from job set of time 1523972520000 ms 18/04/17 16:42:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 237.0 (TID 237) in 12908 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:42:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 237.0, whose tasks have all completed, from pool 18/04/17 16:42:12 INFO scheduler.DAGScheduler: ResultStage 237 (foreachPartition at PredictorEngineApp.java:153) finished in 12.908 s 18/04/17 16:42:12 INFO scheduler.DAGScheduler: Job 237 finished: foreachPartition at PredictorEngineApp.java:153, took 12.925352 s 18/04/17 16:42:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x14828cac connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x14828cac0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54221, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92a1, negotiated timeout = 60000 18/04/17 16:42:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92a1 18/04/17 16:42:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92a1 closed 18/04/17 16:42:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.1 from job set of time 1523972520000 ms 18/04/17 16:42:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 258.0 (TID 258) in 13625 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:42:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 258.0, whose tasks have all completed, from pool 18/04/17 16:42:13 INFO scheduler.DAGScheduler: ResultStage 258 (foreachPartition at PredictorEngineApp.java:153) finished in 13.626 s 18/04/17 16:42:13 INFO scheduler.DAGScheduler: Job 258 finished: foreachPartition at PredictorEngineApp.java:153, took 13.727715 s 18/04/17 16:42:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3ce3067d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3ce3067d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36968, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bc2, negotiated timeout = 60000 18/04/17 16:42:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bc2 18/04/17 16:42:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bc2 closed 18/04/17 16:42:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.11 from job set of time 1523972520000 ms 18/04/17 16:42:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 260.0 (TID 260) in 14325 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:42:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 260.0, whose tasks have all completed, from pool 18/04/17 16:42:14 INFO scheduler.DAGScheduler: ResultStage 260 (foreachPartition at PredictorEngineApp.java:153) finished in 14.327 s 18/04/17 16:42:14 INFO scheduler.DAGScheduler: Job 260 finished: foreachPartition at PredictorEngineApp.java:153, took 14.433941 s 18/04/17 16:42:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63c1dba1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:42:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x63c1dba10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:42:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
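The "streaming job 1523972520000 ms.N" suffixes in this batch run at least to ms.33, and the following batch (below) reaches ms.35: Spark Streaming assigns one such suffix per output operation registered on the context, so the application appears to register on the order of 36 foreachRDD/foreachPartition outputs, plausibly one per Kafka topic or destination. That structure is not spelled out anywhere in the log, so the sketch below is only one plausible shape for it; the class name MultiStreamSketch, the topic names, the broker address and the stubbed partition body are all invented placeholders, with the real per-partition work presumably being the HBase write sketched earlier.

    import java.util.*;

    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    public class MultiStreamSketch {
        public static void main(String[] args) throws Exception {
            JavaStreamingContext jssc = new JavaStreamingContext(
                    new SparkConf().setAppName("multi-stream-sketch"), Durations.seconds(60));

            Map<String, String> kafkaParams = new HashMap<>();
            kafkaParams.put("metadata.broker.list", "broker1:9092"); // placeholder

            // Hypothetical topic list; the real application's topics are not in the log.
            List<String> topics = Arrays.asList("topic0", "topic1", "topic2");

            // Each foreachRDD registered here becomes one numbered output operation,
            // i.e. one "streaming job <batch time> ms.N" entry per batch in the log above.
            for (String topic : topics) {
                JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                        kafkaParams, Collections.singleton(topic));
                stream.foreachRDD(rdd ->
                    rdd.foreachPartition(records -> {
                        // Stub body: the real job presumably writes to HBase here; this
                        // version only drains the iterator and reports the record count.
                        long n = 0;
                        while (records.hasNext()) { records.next(); n++; }
                        System.out.println(topic + ": " + n + " records in this partition");
                    })
                );
            }

            jssc.start();
            jssc.awaitTermination();
        }
    }

With one stream and one output operation per topic, each 60-second batch produces one "Starting/Finished job streaming job <time> ms.N" pair per topic, which matches the volume of scheduler traffic seen here.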
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:42:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36973, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:42:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bc4, negotiated timeout = 60000 18/04/17 16:42:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bc4 18/04/17 16:42:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bc4 closed 18/04/17 16:42:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:42:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972520000 ms.10 from job set of time 1523972520000 ms 18/04/17 16:42:14 INFO scheduler.JobScheduler: Total delay: 14.543 s for time 1523972520000 ms (execution: 14.482 s) 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 288 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 288 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 288 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 288 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 289 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 289 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 289 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 289 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 290 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 290 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 290 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 290 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 291 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 291 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 291 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 291 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 292 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 292 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 292 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 292 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 293 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 293 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 293 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 293 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 294 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 294 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 294 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 294 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 295 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 295 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 295 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 295 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 296 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 296 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 296 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 296 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 297 from persistence list 18/04/17 
16:42:14 INFO storage.BlockManager: Removing RDD 297 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 297 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 297 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 298 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 298 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 298 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 298 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 299 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 299 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 299 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 299 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 300 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 300 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 300 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 300 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 301 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 301 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 301 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 301 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 302 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 302 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 302 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 302 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 303 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 303 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 303 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 303 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 304 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 304 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 304 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 304 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 305 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 305 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 305 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 305 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 306 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 306 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 306 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 306 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 307 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 307 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 307 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 307 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 308 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 308 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 308 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 308 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 309 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 309 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 309 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 309 
18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 310 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 310 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 310 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 310 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 311 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 311 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 311 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 311 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 312 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 312 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 312 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 312 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 313 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 313 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 313 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 313 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 314 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 314 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 314 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 314 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 315 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 315 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 315 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 315 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 316 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 316 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 316 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 316 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 317 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 317 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 317 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 317 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 318 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 318 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 318 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 318 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 319 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 319 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 319 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 319 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 320 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 320 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 320 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 320 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 321 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 321 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 321 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 321 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 322 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 322 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 
322 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 322 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 323 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 323 18/04/17 16:42:14 INFO kafka.KafkaRDD: Removing RDD 323 from persistence list 18/04/17 16:42:14 INFO storage.BlockManager: Removing RDD 323 18/04/17 16:42:14 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:42:14 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972400000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Added jobs for time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.0 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.2 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.1 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.3 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.4 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.0 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.3 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.5 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.6 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.4 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.7 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.9 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.10 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.8 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.11 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.12 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.13 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.14 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.15 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.13 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.16 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.17 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO 
scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.14 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.18 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.19 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.17 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.21 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.16 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.20 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.23 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.21 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.22 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.24 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.25 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.26 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.27 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.28 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.29 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.30 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.31 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.30 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.32 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.33 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.34 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972580000 ms.35 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.35 from job set of time 1523972580000 ms 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at 
PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 264 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 262 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 262 (KafkaRDD[385] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_262 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_262_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_262_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 262 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 262 (KafkaRDD[385] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 262.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 262 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 263 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 263 (KafkaRDD[366] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 262.0 (TID 262, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_263 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_263_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_263_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 263 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 263 (KafkaRDD[366] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 263.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 263 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 264 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 264 (KafkaRDD[369] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 263.0 (TID 263, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_264 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_264_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_264_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 264 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 264 (KafkaRDD[369] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 264.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 265 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 265 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: 
Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 265 (KafkaRDD[387] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 264.0 (TID 264, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_265 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_262_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_265_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_265_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 265 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 265 (KafkaRDD[387] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 265.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 266 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 266 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 266 (KafkaRDD[362] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 265.0 (TID 265, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_263_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_266 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_266_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_266_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 266 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 266 (KafkaRDD[362] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 266.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 267 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 267 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 267 (KafkaRDD[382] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 266.0 (TID 266, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_267 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_264_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_267_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_267_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 267 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 267 (KafkaRDD[382] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 267.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 270 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 268 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 268 (KafkaRDD[361] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_268 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 267.0 (TID 267, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_265_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_268_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_268_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 268 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 268 (KafkaRDD[361] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 268.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 268 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 269 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 269 (KafkaRDD[379] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_269 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 268.0 (TID 268, ***hostname masked***, executor 8, 
partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_266_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_269_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_269_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 269 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 269 (KafkaRDD[379] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 269.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 269 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 270 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 270 (KafkaRDD[378] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_270 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 269.0 (TID 269, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_270_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_270_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 270 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 270 (KafkaRDD[378] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 270.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 271 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 271 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 271 (KafkaRDD[380] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_271 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 270.0 (TID 270, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_267_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_271_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added 
broadcast_271_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 271 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 271 (KafkaRDD[380] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 271.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 272 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 272 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 272 (KafkaRDD[370] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_272 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 271.0 (TID 271, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_268_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_272_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_272_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 272 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 272 (KafkaRDD[370] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 272.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 273 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 273 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 273 (KafkaRDD[375] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_273 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 272.0 (TID 272, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_273_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_273_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 273 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 273 (KafkaRDD[375] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO 
cluster.YarnClusterScheduler: Adding task set 273.0 with 1 tasks 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_270_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 274 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 274 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 274 (KafkaRDD[388] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_274 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 273.0 (TID 273, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_269_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_272_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_274_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_274_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 274 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 274 (KafkaRDD[388] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 274.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 276 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 275 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 275 (KafkaRDD[383] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_275 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 274.0 (TID 274, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_275_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_275_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 275 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 275 (KafkaRDD[383] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 275.0 with 1 tasks 18/04/17 16:43:00 INFO 
scheduler.DAGScheduler: Got job 275 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 276 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 276 (KafkaRDD[372] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_276 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_271_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 275.0 (TID 275, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:43:00 INFO spark.ContextCleaner: Cleaned accumulator 241 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_276_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_276_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 276 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 276 (KafkaRDD[372] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 276.0 with 1 tasks 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_237_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 277 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 277 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 277 (KafkaRDD[394] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_277 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_273_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 276.0 (TID 276, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_237_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_274_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_277_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO spark.ContextCleaner: Cleaned accumulator 238 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_277_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: 
Created broadcast 277 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 277 (KafkaRDD[394] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 277.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 278 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 278 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 278 (KafkaRDD[386] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_278 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_275_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_240_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 277.0 (TID 277, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_240_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_278_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_278_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 278 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 278 (KafkaRDD[386] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 278.0 with 1 tasks 18/04/17 16:43:00 INFO spark.ContextCleaner: Cleaned accumulator 247 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 279 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 279 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 279 (KafkaRDD[368] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_279 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_276_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_244_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 278.0 (TID 278, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: 
Removed broadcast_244_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO spark.ContextCleaner: Cleaned accumulator 245 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_251_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_279_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_279_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_251_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 279 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 279 (KafkaRDD[368] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 279.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 280 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 280 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 280 (KafkaRDD[365] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_277_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO spark.ContextCleaner: Cleaned accumulator 252 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_280 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 279.0 (TID 279, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_246_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_278_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_246_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO spark.ContextCleaner: Cleaned accumulator 261 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_280_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_258_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_280_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 280 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 280 (KafkaRDD[365] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 280.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 281 (foreachPartition 
at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 281 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 281 (KafkaRDD[367] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_281 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 280.0 (TID 280, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_258_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_279_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO spark.ContextCleaner: Cleaned accumulator 259 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_260_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Removed broadcast_260_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 272.0 (TID 272) in 52 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_281_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 272.0, whose tasks have all completed, from pool 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_281_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 281 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 281 (KafkaRDD[367] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 281.0 with 1 tasks 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_280_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 282 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 282 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 282 (KafkaRDD[389] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_282 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 281.0 (TID 281, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_282_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO 
storage.BlockManagerInfo: Added broadcast_282_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 282 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 282 (KafkaRDD[389] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 282.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 283 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 283 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 283 (KafkaRDD[392] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_283 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 282.0 (TID 282, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_281_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_283_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_283_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 283 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 283 (KafkaRDD[392] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 283.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 284 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 284 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 284 (KafkaRDD[391] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_284 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 283.0 (TID 283, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_284_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_284_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 284 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 284 (KafkaRDD[391] at createDirectStream at 
PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 284.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 285 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 285 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 285 (KafkaRDD[371] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_285 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 284.0 (TID 284, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_285_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_285_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 285 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 285 (KafkaRDD[371] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 285.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 286 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 286 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 286 (KafkaRDD[384] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_282_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_286 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 285.0 (TID 285, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_286_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_286_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 286 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 286 (KafkaRDD[384] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 286.0 with 1 tasks 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_283_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Got job 287 (foreachPartition at PredictorEngineApp.java:153) with 1 
output partitions 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 287 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting ResultStage 287 (KafkaRDD[393] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_287 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 286.0 (TID 286, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:43:00 INFO storage.MemoryStore: Block broadcast_287_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_287_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:43:00 INFO spark.SparkContext: Created broadcast 287 from broadcast at DAGScheduler.scala:1006 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 287 (KafkaRDD[393] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:43:00 INFO cluster.YarnClusterScheduler: Adding task set 287.0 with 1 tasks 18/04/17 16:43:00 INFO scheduler.DAGScheduler: ResultStage 272 (foreachPartition at PredictorEngineApp.java:153) finished in 0.071 s 18/04/17 16:43:00 INFO scheduler.DAGScheduler: Job 272 finished: foreachPartition at PredictorEngineApp.java:153, took 0.126828 s 18/04/17 16:43:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 287.0 (TID 287, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:43:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3b562e44 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3b562e440x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
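[Editor's note - illustrative sketch, not part of the log.] The scheduler entries above all repeat one pattern: for every output operation in the 16:43:00 batch, the DAGScheduler builds a single-task ResultStage over a KafkaRDD created by createDirectStream at PredictorEngineApp.java:125 and runs foreachPartition at PredictorEngineApp.java:153. The application source is not included in this log, and the distinct KafkaRDD ids per job (361-394) only suggest that several direct streams exist, e.g. one per topic; that is an inference, not something the log states. A minimal driver sketch that would produce this shape of job stream, assuming the Spark 1.6 Java API with the Kafka 0.8 direct-stream integration (broker list, topics, batch interval, and every name other than the two logged call sites are assumptions), looks roughly like this:

import java.util.*;
import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public class PredictorEngineAppSketch {  // hypothetical name; the real class is PredictorEngineApp
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine");
        // Batch interval assumed; the log only shows batches at minute boundaries.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092");  // placeholder brokers

        // One direct stream per input topic (an inference from the distinct KafkaRDD ids).
        // This is what the log attributes to "createDirectStream at PredictorEngineApp.java:125".
        for (String topic : Arrays.asList("topicA", "topicB")) {  // placeholder topics
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, Collections.singleton(topic));

            // Each foreachRDD output operation becomes one streaming job per batch
            // ("streaming job 1523972580000 ms.N" in the log).
            stream.foreachRDD(new VoidFunction<JavaPairRDD<String, String>>() {
                @Override
                public void call(JavaPairRDD<String, String> rdd) {
                    // "foreachPartition at PredictorEngineApp.java:153"
                    rdd.foreachPartition(new VoidFunction<Iterator<Tuple2<String, String>>>() {
                        @Override
                        public void call(Iterator<Tuple2<String, String>> records) {
                            while (records.hasNext()) {
                                Tuple2<String, String> record = records.next();
                                // score the record and write the prediction out (e.g. to HBase)
                            }
                        }
                    });
                }
            });
        }

        jssc.start();
        jssc.awaitTermination();
    }
}

Under this structure each output operation yields one job per batch, and each partition of the underlying KafkaRDD yields one task; the KafkaRDDs in this log have a single partition each, which is why every stage above carries exactly one task. The log now continues: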
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54457, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_284_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_286_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_287_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO storage.BlockManagerInfo: Added broadcast_285_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:43:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92ae, negotiated timeout = 60000 18/04/17 16:43:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92ae 18/04/17 16:43:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92ae closed 18/04/17 16:43:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.10 from job set of time 1523972580000 ms 18/04/17 16:43:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 262.0 (TID 262) in 2577 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:43:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 262.0, whose tasks have all completed, from pool 18/04/17 16:43:02 INFO scheduler.DAGScheduler: ResultStage 262 (foreachPartition at PredictorEngineApp.java:153) finished in 2.577 s 18/04/17 16:43:02 INFO scheduler.DAGScheduler: Job 264 finished: foreachPartition at PredictorEngineApp.java:153, took 2.590299 s 18/04/17 16:43:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x411e02d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x411e02d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54463, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92b2, negotiated timeout = 60000 18/04/17 16:43:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92b2 18/04/17 16:43:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92b2 closed 18/04/17 16:43:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.25 from job set of time 1523972580000 ms 18/04/17 16:43:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 281.0 (TID 281) in 2551 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:43:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 281.0, whose tasks have all completed, from pool 18/04/17 16:43:02 INFO scheduler.DAGScheduler: ResultStage 281 (foreachPartition at PredictorEngineApp.java:153) finished in 2.552 s 18/04/17 16:43:02 INFO scheduler.DAGScheduler: Job 281 finished: foreachPartition at PredictorEngineApp.java:153, took 2.659644 s 18/04/17 16:43:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x30035d53 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x30035d530x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54466, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92b3, negotiated timeout = 60000 18/04/17 16:43:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92b3 18/04/17 16:43:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92b3 closed 18/04/17 16:43:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.7 from job set of time 1523972580000 ms 18/04/17 16:43:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 279.0 (TID 279) in 3412 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:43:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 279.0, whose tasks have all completed, from pool 18/04/17 16:43:03 INFO scheduler.DAGScheduler: ResultStage 279 (foreachPartition at PredictorEngineApp.java:153) finished in 3.413 s 18/04/17 16:43:03 INFO scheduler.DAGScheduler: Job 279 finished: foreachPartition at PredictorEngineApp.java:153, took 3.510297 s 18/04/17 16:43:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x39833c30 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x39833c300x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60852, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92db, negotiated timeout = 60000 18/04/17 16:43:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92db 18/04/17 16:43:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92db closed 18/04/17 16:43:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.8 from job set of time 1523972580000 ms 18/04/17 16:43:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 284.0 (TID 284) in 4170 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:43:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 284.0, whose tasks have all completed, from pool 18/04/17 16:43:04 INFO scheduler.DAGScheduler: ResultStage 284 (foreachPartition at PredictorEngineApp.java:153) finished in 4.171 s 18/04/17 16:43:04 INFO scheduler.DAGScheduler: Job 284 finished: foreachPartition at PredictorEngineApp.java:153, took 4.289946 s 18/04/17 16:43:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xa7d3d45 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xa7d3d450x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37218, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28be5, negotiated timeout = 60000 18/04/17 16:43:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28be5 18/04/17 16:43:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28be5 closed 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.31 from job set of time 1523972580000 ms 18/04/17 16:43:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 264.0 (TID 264) in 4415 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:43:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 264.0, whose tasks have all completed, from pool 18/04/17 16:43:04 INFO scheduler.DAGScheduler: ResultStage 264 (foreachPartition at PredictorEngineApp.java:153) finished in 4.417 s 18/04/17 16:43:04 INFO scheduler.DAGScheduler: Job 263 finished: foreachPartition at PredictorEngineApp.java:153, took 4.438697 s 18/04/17 16:43:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x62214010 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x622140100x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60859, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 266.0 (TID 266) in 4410 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:43:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 266.0, whose tasks have all completed, from pool 18/04/17 16:43:04 INFO scheduler.DAGScheduler: ResultStage 266 (foreachPartition at PredictorEngineApp.java:153) finished in 4.411 s 18/04/17 16:43:04 INFO scheduler.DAGScheduler: Job 266 finished: foreachPartition at PredictorEngineApp.java:153, took 4.443534 s 18/04/17 16:43:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xcc700cc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92dc, negotiated timeout = 60000 18/04/17 16:43:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xcc700cc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60860, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92dd, negotiated timeout = 60000 18/04/17 16:43:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92dc 18/04/17 16:43:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92dc closed 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92dd 18/04/17 16:43:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92dd closed 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.9 from job set of time 1523972580000 ms 18/04/17 16:43:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.2 from job set of time 1523972580000 ms 18/04/17 16:43:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 277.0 (TID 277) in 4476 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:43:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 277.0, whose tasks have all completed, from pool 18/04/17 16:43:04 INFO scheduler.DAGScheduler: ResultStage 277 (foreachPartition at PredictorEngineApp.java:153) finished in 4.476 s 18/04/17 16:43:04 INFO scheduler.DAGScheduler: Job 277 finished: foreachPartition at PredictorEngineApp.java:153, took 4.563275 s 18/04/17 16:43:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x21131e1e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 16:43:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x21131e1e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54483, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92b6, negotiated timeout = 60000 18/04/17 16:43:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92b6 18/04/17 16:43:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92b6 closed 18/04/17 16:43:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.34 from job set of time 1523972580000 ms 18/04/17 16:43:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 269.0 (TID 269) in 6081 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:43:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 269.0, whose tasks have all completed, from pool 18/04/17 16:43:06 INFO scheduler.DAGScheduler: ResultStage 269 (foreachPartition at PredictorEngineApp.java:153) finished in 6.081 s 18/04/17 16:43:06 INFO scheduler.DAGScheduler: Job 268 finished: foreachPartition at PredictorEngineApp.java:153, took 6.128506 s 18/04/17 16:43:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x51fd338e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x51fd338e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37233, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28be8, negotiated timeout = 60000 18/04/17 16:43:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28be8 18/04/17 16:43:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28be8 closed 18/04/17 16:43:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.19 from job set of time 1523972580000 ms 18/04/17 16:43:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 274.0 (TID 274) in 7121 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:43:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 274.0, whose tasks have all completed, from pool 18/04/17 16:43:07 INFO scheduler.DAGScheduler: ResultStage 274 (foreachPartition at PredictorEngineApp.java:153) finished in 7.123 s 18/04/17 16:43:07 INFO scheduler.DAGScheduler: Job 274 finished: foreachPartition at PredictorEngineApp.java:153, took 7.185990 s 18/04/17 16:43:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7871729 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x78717290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54494, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92b8, negotiated timeout = 60000 18/04/17 16:43:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92b8 18/04/17 16:43:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92b8 closed 18/04/17 16:43:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.28 from job set of time 1523972580000 ms 18/04/17 16:43:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 273.0 (TID 273) in 7777 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:43:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 273.0, whose tasks have all completed, from pool 18/04/17 16:43:07 INFO scheduler.DAGScheduler: ResultStage 273 (foreachPartition at PredictorEngineApp.java:153) finished in 7.778 s 18/04/17 16:43:07 INFO scheduler.DAGScheduler: Job 273 finished: foreachPartition at PredictorEngineApp.java:153, took 7.837561 s 18/04/17 16:43:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x373116ac connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x373116ac0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54498, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92bc, negotiated timeout = 60000 18/04/17 16:43:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92bc 18/04/17 16:43:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92bc closed 18/04/17 16:43:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.15 from job set of time 1523972580000 ms 18/04/17 16:43:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 283.0 (TID 283) in 7870 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:43:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 283.0, whose tasks have all completed, from pool 18/04/17 16:43:08 INFO scheduler.DAGScheduler: ResultStage 283 (foreachPartition at PredictorEngineApp.java:153) finished in 7.871 s 18/04/17 16:43:08 INFO scheduler.DAGScheduler: Job 283 finished: foreachPartition at PredictorEngineApp.java:153, took 7.986515 s 18/04/17 16:43:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f588c80 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f588c800x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37246, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bea, negotiated timeout = 60000 18/04/17 16:43:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bea 18/04/17 16:43:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bea closed 18/04/17 16:43:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.32 from job set of time 1523972580000 ms 18/04/17 16:43:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 265.0 (TID 265) in 8338 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:43:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 265.0, whose tasks have all completed, from pool 18/04/17 16:43:08 INFO scheduler.DAGScheduler: ResultStage 265 (foreachPartition at PredictorEngineApp.java:153) finished in 8.339 s 18/04/17 16:43:08 INFO scheduler.DAGScheduler: Job 265 finished: foreachPartition at PredictorEngineApp.java:153, took 8.366628 s 18/04/17 16:43:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5be801a8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5be801a80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37249, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28beb, negotiated timeout = 60000 18/04/17 16:43:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28beb 18/04/17 16:43:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28beb closed 18/04/17 16:43:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.27 from job set of time 1523972580000 ms 18/04/17 16:43:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 268.0 (TID 268) in 8935 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:43:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 268.0, whose tasks have all completed, from pool 18/04/17 16:43:09 INFO scheduler.DAGScheduler: ResultStage 268 (foreachPartition at PredictorEngineApp.java:153) finished in 8.935 s 18/04/17 16:43:09 INFO scheduler.DAGScheduler: Job 270 finished: foreachPartition at PredictorEngineApp.java:153, took 8.977114 s 18/04/17 16:43:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x15216b7d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x15216b7d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60890, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92e1, negotiated timeout = 60000 18/04/17 16:43:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92e1 18/04/17 16:43:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92e1 closed 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.1 from job set of time 1523972580000 ms 18/04/17 16:43:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 263.0 (TID 263) in 9134 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:43:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 263.0, whose tasks have all completed, from pool 18/04/17 16:43:09 INFO scheduler.DAGScheduler: ResultStage 263 (foreachPartition at PredictorEngineApp.java:153) finished in 9.135 s 18/04/17 16:43:09 INFO scheduler.DAGScheduler: Job 262 finished: foreachPartition at PredictorEngineApp.java:153, took 9.153076 s 18/04/17 16:43:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x70388e15 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x70388e150x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60894, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92e2, negotiated timeout = 60000 18/04/17 16:43:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92e2 18/04/17 16:43:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92e2 closed 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.6 from job set of time 1523972580000 ms 18/04/17 16:43:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 287.0 (TID 287) in 9336 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:43:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 287.0, whose tasks have all completed, from pool 18/04/17 16:43:09 INFO scheduler.DAGScheduler: ResultStage 287 (foreachPartition at PredictorEngineApp.java:153) finished in 9.337 s 18/04/17 16:43:09 INFO scheduler.DAGScheduler: Job 287 finished: foreachPartition at PredictorEngineApp.java:153, took 9.461993 s 18/04/17 16:43:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2ef01074 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2ef010740x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60897, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92e4, negotiated timeout = 60000 18/04/17 16:43:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92e4 18/04/17 16:43:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92e4 closed 18/04/17 16:43:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.33 from job set of time 1523972580000 ms 18/04/17 16:43:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 285.0 (TID 285) in 11401 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:43:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 285.0, whose tasks have all completed, from pool 18/04/17 16:43:11 INFO scheduler.DAGScheduler: ResultStage 285 (foreachPartition at PredictorEngineApp.java:153) finished in 11.403 s 18/04/17 16:43:11 INFO scheduler.DAGScheduler: Job 285 finished: foreachPartition at PredictorEngineApp.java:153, took 11.523258 s 18/04/17 16:43:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1839a692 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1839a6920x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60903, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92e5, negotiated timeout = 60000 18/04/17 16:43:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92e5 18/04/17 16:43:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92e5 closed 18/04/17 16:43:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.11 from job set of time 1523972580000 ms 18/04/17 16:43:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 280.0 (TID 280) in 12693 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:43:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 280.0, whose tasks have all completed, from pool 18/04/17 16:43:12 INFO scheduler.DAGScheduler: ResultStage 280 (foreachPartition at PredictorEngineApp.java:153) finished in 12.694 s 18/04/17 16:43:12 INFO scheduler.DAGScheduler: Job 280 finished: foreachPartition at PredictorEngineApp.java:153, took 12.796255 s 18/04/17 16:43:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x16efb4b9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x16efb4b90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60909, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92e6, negotiated timeout = 60000 18/04/17 16:43:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92e6 18/04/17 16:43:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92e6 closed 18/04/17 16:43:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:12 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.5 from job set of time 1523972580000 ms 18/04/17 16:43:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 267.0 (TID 267) in 12886 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:43:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 267.0, whose tasks have all completed, from pool 18/04/17 16:43:12 INFO scheduler.DAGScheduler: ResultStage 267 (foreachPartition at PredictorEngineApp.java:153) finished in 12.887 s 18/04/17 16:43:12 INFO scheduler.DAGScheduler: Job 267 finished: foreachPartition at PredictorEngineApp.java:153, took 12.924312 s 18/04/17 16:43:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f9b341f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f9b341f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60912, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92e7, negotiated timeout = 60000 18/04/17 16:43:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92e7 18/04/17 16:43:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92e7 closed 18/04/17 16:43:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.22 from job set of time 1523972580000 ms 18/04/17 16:43:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 276.0 (TID 276) in 12870 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:43:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 276.0, whose tasks have all completed, from pool 18/04/17 16:43:13 INFO scheduler.DAGScheduler: ResultStage 276 (foreachPartition at PredictorEngineApp.java:153) finished in 12.871 s 18/04/17 16:43:13 INFO scheduler.DAGScheduler: Job 275 finished: foreachPartition at PredictorEngineApp.java:153, took 12.954214 s 18/04/17 16:43:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1e5c8244 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1e5c82440x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54533, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92be, negotiated timeout = 60000 18/04/17 16:43:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92be 18/04/17 16:43:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92be closed 18/04/17 16:43:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.12 from job set of time 1523972580000 ms 18/04/17 16:43:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 275.0 (TID 275) in 14255 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:43:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 275.0, whose tasks have all completed, from pool 18/04/17 16:43:14 INFO scheduler.DAGScheduler: ResultStage 275 (foreachPartition at PredictorEngineApp.java:153) finished in 14.267 s 18/04/17 16:43:14 INFO scheduler.DAGScheduler: Job 276 finished: foreachPartition at PredictorEngineApp.java:153, took 14.335333 s 18/04/17 16:43:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x61cbcaea connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x61cbcaea0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60920, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92e9, negotiated timeout = 60000 18/04/17 16:43:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92e9 18/04/17 16:43:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92e9 closed 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.23 from job set of time 1523972580000 ms 18/04/17 16:43:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 270.0 (TID 270) in 14644 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:43:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 270.0, whose tasks have all completed, from pool 18/04/17 16:43:14 INFO scheduler.DAGScheduler: ResultStage 270 (foreachPartition at PredictorEngineApp.java:153) finished in 14.646 s 18/04/17 16:43:14 INFO scheduler.DAGScheduler: Job 269 finished: foreachPartition at PredictorEngineApp.java:153, took 14.697162 s 18/04/17 16:43:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x678fb2b8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x678fb2b80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37286, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bf1, negotiated timeout = 60000 18/04/17 16:43:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bf1 18/04/17 16:43:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bf1 closed 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.18 from job set of time 1523972580000 ms 18/04/17 16:43:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 286.0 (TID 286) in 14671 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:43:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 286.0, whose tasks have all completed, from pool 18/04/17 16:43:14 INFO scheduler.DAGScheduler: ResultStage 286 (foreachPartition at PredictorEngineApp.java:153) finished in 14.672 s 18/04/17 16:43:14 INFO scheduler.DAGScheduler: Job 286 finished: foreachPartition at PredictorEngineApp.java:153, took 14.795029 s 18/04/17 16:43:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x56e5fa95 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x56e5fa950x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37289, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bf2, negotiated timeout = 60000 18/04/17 16:43:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bf2 18/04/17 16:43:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bf2 closed 18/04/17 16:43:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.24 from job set of time 1523972580000 ms 18/04/17 16:43:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 282.0 (TID 282) in 14881 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:43:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 282.0, whose tasks have all completed, from pool 18/04/17 16:43:15 INFO scheduler.DAGScheduler: ResultStage 282 (foreachPartition at PredictorEngineApp.java:153) finished in 14.882 s 18/04/17 16:43:15 INFO scheduler.DAGScheduler: Job 282 finished: foreachPartition at PredictorEngineApp.java:153, took 14.994929 s 18/04/17 16:43:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x39c6838b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x39c6838b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54549, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92c2, negotiated timeout = 60000 18/04/17 16:43:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92c2 18/04/17 16:43:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92c2 closed 18/04/17 16:43:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:15 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.29 from job set of time 1523972580000 ms 18/04/17 16:43:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 271.0 (TID 271) in 15154 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:43:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 271.0, whose tasks have all completed, from pool 18/04/17 16:43:15 INFO scheduler.DAGScheduler: ResultStage 271 (foreachPartition at PredictorEngineApp.java:153) finished in 15.155 s 18/04/17 16:43:15 INFO scheduler.DAGScheduler: Job 271 finished: foreachPartition at PredictorEngineApp.java:153, took 15.210181 s 18/04/17 16:43:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x81c64c4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x81c64c40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37296, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28bf5, negotiated timeout = 60000 18/04/17 16:43:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28bf5 18/04/17 16:43:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28bf5 closed 18/04/17 16:43:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:15 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.20 from job set of time 1523972580000 ms 18/04/17 16:43:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 278.0 (TID 278) in 19895 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:43:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 278.0, whose tasks have all completed, from pool 18/04/17 16:43:20 INFO scheduler.DAGScheduler: ResultStage 278 (foreachPartition at PredictorEngineApp.java:153) finished in 19.897 s 18/04/17 16:43:20 INFO scheduler.DAGScheduler: Job 278 finished: foreachPartition at PredictorEngineApp.java:153, took 19.986598 s 18/04/17 16:43:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x71ef2d55 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:43:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x71ef2d550x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:43:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:43:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60942, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:43:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92ee, negotiated timeout = 60000 18/04/17 16:43:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92ee 18/04/17 16:43:20 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92ee closed 18/04/17 16:43:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:43:20 INFO scheduler.JobScheduler: Finished job streaming job 1523972580000 ms.26 from job set of time 1523972580000 ms 18/04/17 16:43:20 INFO scheduler.JobScheduler: Total delay: 20.092 s for time 1523972580000 ms (execution: 20.027 s) 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 324 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 324 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 324 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 324 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 325 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 325 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 325 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 325 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 326 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 326 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 326 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 326 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 327 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 327 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 327 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 327 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 328 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 328 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 328 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 328 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 329 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 329 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 329 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 329 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 330 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 330 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 330 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 330 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 331 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 331 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 331 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 331 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 332 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 332 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 332 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 332 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 333 from persistence list 18/04/17 
16:43:20 INFO storage.BlockManager: Removing RDD 333 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 333 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 333 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 334 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 334 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 334 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 334 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 335 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 335 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 335 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 335 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 336 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 336 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 336 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 336 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 337 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 337 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 337 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 337 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 338 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 338 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 338 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 338 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 339 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 339 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 339 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 339 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 340 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 340 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 340 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 340 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 341 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 341 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 341 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 341 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 342 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 342 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 342 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 342 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 343 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 343 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 343 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 343 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 344 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 344 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 344 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 344 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 345 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 345 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 345 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 345 
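The hconnection-0x... ZooKeeper sessions that open and close right after each finished job above typically indicate an HBase Connection being created and torn down for every batch or partition rather than being reused. Below is a minimal, hypothetical sketch of the kind of code that produces this pattern, assuming a per-partition HBase write inside the foreachPartition at PredictorEngineApp.java:153; the class name, table name, column family and row key are illustrative placeholders, not the application's actual code.

import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.VoidFunction;

public class HBaseSinkSketch {
    // One invocation per micro-batch RDD: each partition opens its own HBase
    // connection, writes its records, and closes the connection again, which
    // appears in the logs as short-lived ZooKeeper sessions (hypothetical sketch,
    // not taken from the actual application).
    static void writeToHBase(JavaRDD<String> batch) {
        batch.foreachPartition(new VoidFunction<Iterator<String>>() {
            @Override
            public void call(Iterator<String> records) throws Exception {
                Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
                try (Connection connection = ConnectionFactory.createConnection(conf);
                     Table table = connection.getTable(TableName.valueOf("predictions"))) { // placeholder table name
                    while (records.hasNext()) {
                        String record = records.next();
                        Put put = new Put(Bytes.toBytes(record.hashCode())); // illustrative row key only
                        put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("payload"), Bytes.toBytes(record));
                        table.put(put);
                    }
                }
            }
        });
    }
}

Reusing one Connection per executor JVM (for example through a lazily initialized holder) instead of creating it inside every partition avoids negotiating a new ZooKeeper session for each task, and is the commonly recommended pattern for HBase sinks in Spark Streaming.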
18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 346 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 346 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 346 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 346 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 347 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 347 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 347 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 347 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 348 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 348 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 348 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 348 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 349 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 349 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 349 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 349 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 350 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 350 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 350 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 350 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 351 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 351 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 351 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 351 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 352 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 352 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 352 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 352 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 353 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 353 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 353 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 353 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 354 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 354 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 354 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 354 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 355 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 355 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 355 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 355 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 356 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 356 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 356 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 356 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 357 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 357 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 357 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 357 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 358 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 358 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 
358 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 358 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 359 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 359 18/04/17 16:43:20 INFO kafka.KafkaRDD: Removing RDD 359 from persistence list 18/04/17 16:43:20 INFO storage.BlockManager: Removing RDD 359 18/04/17 16:43:20 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:43:20 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972460000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Added jobs for time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.0 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.1 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.2 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.3 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.4 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.4 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.0 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.3 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.5 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.6 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.7 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.8 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.9 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.10 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.11 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.12 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.13 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.14 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.13 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.15 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.17 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.16 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO 
scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.14 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.17 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.19 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.18 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.16 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.20 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.21 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.22 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.23 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.21 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.25 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.24 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.26 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.27 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.28 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.29 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.30 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.31 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.30 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.32 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.33 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.34 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972640000 ms.35 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.35 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 284 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_262_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at 
PredictorEngineApp.java:153 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_262_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 288 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 288 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 288 (KafkaRDD[420] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_288 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 263 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 265 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_263_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_263_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO 
spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 264 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_265_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_288_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_288_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 288 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 288 (KafkaRDD[420] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 288.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 289 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 289 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 289 (KafkaRDD[419] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_265_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_289 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 288.0 (TID 288, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 266 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_264_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_264_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_289_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_289_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 289 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 289 (KafkaRDD[419] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 289.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 290 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 290 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: 
Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 290 (KafkaRDD[425] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 268 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 289.0 (TID 289, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_290 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_266_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_266_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_290_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_290_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 290 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_288_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 290 (KafkaRDD[425] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 290.0 with 1 tasks 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 267 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 291 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 291 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 291 (KafkaRDD[397] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 290.0 (TID 290, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_291 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_268_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_268_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 269 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_267_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_291_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_291_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 291 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from 
ResultStage 291 (KafkaRDD[397] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 291.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 292 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 292 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 292 (KafkaRDD[398] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_267_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 291.0 (TID 291, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_292 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 271 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_269_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_289_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_292_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_292_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 292 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 292 (KafkaRDD[398] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 292.0 with 1 tasks 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_269_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 293 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 293 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 293 (KafkaRDD[415] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_290_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 292.0 (TID 292, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_293 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 270 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_271_piece0 on ***IP masked***:45737 in memory (size: 
3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_271_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_291_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_293_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 272 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_293_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 293 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 293 (KafkaRDD[415] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 293.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 294 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 294 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_270_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 294 (KafkaRDD[411] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 293.0 (TID 293, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_294 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_270_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 274 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_272_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_294_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_294_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 294 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 294 (KafkaRDD[411] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 294.0 with 1 tasks 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_272_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 295 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 295 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO 
scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 295 (KafkaRDD[407] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_295 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 294.0 (TID 294, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 273 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_295_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_292_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_295_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_274_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 295 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 295 (KafkaRDD[407] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 295.0 with 1 tasks 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_293_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 296 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 296 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 296 (KafkaRDD[429] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_274_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 295.0 (TID 295, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_296 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 275 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_273_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_294_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_273_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 277 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_275_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_296_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 
INFO storage.BlockManagerInfo: Added broadcast_296_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 296 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 296 (KafkaRDD[429] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 296.0 with 1 tasks 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_275_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 297 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 297 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 297 (KafkaRDD[421] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 296.0 (TID 296, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 276 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_297 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_277_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_277_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_295_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 278 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_276_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_297_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_276_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_297_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 297 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_296_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 297 (KafkaRDD[421] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 297.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 298 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 298 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing 
parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 298 (KafkaRDD[414] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_287_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_298 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 297.0 (TID 297, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_287_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 288 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_286_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_298_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_286_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_298_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 298 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 298 (KafkaRDD[414] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 298.0 with 1 tasks 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 279 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 299 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 299 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 299 (KafkaRDD[403] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 298.0 (TID 298, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_279_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_299 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_279_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 280 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_278_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_297_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_278_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO 
storage.MemoryStore: Block broadcast_299_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_299_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 282 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 299 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 299 (KafkaRDD[403] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 299.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 300 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 300 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 300 (KafkaRDD[423] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_280_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 299.0 (TID 299, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_298_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_300 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_280_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 281 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_282_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_282_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_300_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_300_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 300 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 300 (KafkaRDD[423] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 300.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 301 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 301 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 283 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 301 
(KafkaRDD[424] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_301 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 300.0 (TID 300, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_281_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_281_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 285 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_301_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_301_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_283_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 301 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 301 (KafkaRDD[424] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 301.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 302 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 302 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 302 (KafkaRDD[405] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_283_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_302 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 301.0 (TID 301, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_285_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_302_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_300_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_302_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 302 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 302 (KafkaRDD[405] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 302.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 303 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 303 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 303 (KafkaRDD[416] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_285_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_303 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 302.0 (TID 302, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 286 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_284_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_303_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Removed broadcast_284_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_303_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 303 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 303 (KafkaRDD[416] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 303.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 304 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 304 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 304 (KafkaRDD[406] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_304 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 303.0 (TID 303, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:44:00 INFO spark.ContextCleaner: Cleaned accumulator 287 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_301_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_304_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_304_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 304 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 304 (KafkaRDD[406] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 304.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 305 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 305 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 305 (KafkaRDD[408] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_305 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 304.0 (TID 304, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_305_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_305_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 305 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 305 (KafkaRDD[408] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 305.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 306 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 306 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 306 (KafkaRDD[422] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_306 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 305.0 (TID 305, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_303_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_306_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_306_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 306 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 306 (KafkaRDD[422] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 306.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 307 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 307 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 307 (KafkaRDD[404] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_307 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_304_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 306.0 (TID 306, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_302_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_307_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_307_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 307 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 307 (KafkaRDD[404] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 307.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 308 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 308 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 308 (KafkaRDD[427] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_308 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 307.0 (TID 307, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_305_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_308_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_308_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 308 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 308 (KafkaRDD[427] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 308.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 309 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 309 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of 
final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 309 (KafkaRDD[430] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_309 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_306_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 308.0 (TID 308, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_309_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_309_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 309 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 309 (KafkaRDD[430] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 309.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 310 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 310 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 310 (KafkaRDD[428] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_307_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_310 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 309.0 (TID 309, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_310_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_310_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 310 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 310 (KafkaRDD[428] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 310.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 311 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 311 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 311 (KafkaRDD[402] at createDirectStream at PredictorEngineApp.java:125), 
which has no missing parents 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_311 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 310.0 (TID 310, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_308_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_311_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_311_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 311 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 311 (KafkaRDD[402] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 311.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 312 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 312 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 312 (KafkaRDD[418] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_312 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 311.0 (TID 311, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_312_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_312_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 312 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 312 (KafkaRDD[418] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 312.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Got job 313 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 313 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting ResultStage 313 (KafkaRDD[401] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_313 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_309_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 
312.0 (TID 312, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:44:00 INFO storage.MemoryStore: Block broadcast_313_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_313_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:44:00 INFO spark.SparkContext: Created broadcast 313 from broadcast at DAGScheduler.scala:1006 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 313 (KafkaRDD[401] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Adding task set 313.0 with 1 tasks 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 313.0 (TID 313, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_311_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_313_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_312_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 302.0 (TID 302) in 66 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 302.0, whose tasks have all completed, from pool 18/04/17 16:44:00 INFO scheduler.DAGScheduler: ResultStage 302 (foreachPartition at PredictorEngineApp.java:153) finished in 0.067 s 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Job 302 finished: foreachPartition at PredictorEngineApp.java:153, took 0.144431 s 18/04/17 16:44:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x69419be connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x69419be0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37470, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c05, negotiated timeout = 60000 18/04/17 16:44:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c05 18/04/17 16:44:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c05 closed 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.9 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 292.0 (TID 292) in 186 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 292.0, whose tasks have all completed, from pool 18/04/17 16:44:00 INFO scheduler.DAGScheduler: ResultStage 292 (foreachPartition at PredictorEngineApp.java:153) finished in 0.187 s 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Job 292 finished: foreachPartition at PredictorEngineApp.java:153, took 0.217286 s 18/04/17 16:44:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5ccfd674 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5ccfd6740x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54729, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92d3, negotiated timeout = 60000 18/04/17 16:44:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92d3 18/04/17 16:44:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92d3 closed 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.2 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 312.0 (TID 312) in 156 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:44:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 312.0, whose tasks have all completed, from pool 18/04/17 16:44:00 INFO scheduler.DAGScheduler: ResultStage 312 (foreachPartition at PredictorEngineApp.java:153) finished in 0.157 s 18/04/17 16:44:00 INFO scheduler.DAGScheduler: Job 312 finished: foreachPartition at PredictorEngineApp.java:153, took 0.266298 s 18/04/17 16:44:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x161b6788 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x161b67880x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37476, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c09, negotiated timeout = 60000 18/04/17 16:44:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c09 18/04/17 16:44:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c09 closed 18/04/17 16:44:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.22 from job set of time 1523972640000 ms 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_310_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:00 INFO storage.BlockManagerInfo: Added broadcast_299_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:44:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 297.0 (TID 297) in 1576 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:44:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 297.0, whose tasks have all completed, from pool 18/04/17 16:44:01 INFO scheduler.DAGScheduler: ResultStage 297 (foreachPartition at PredictorEngineApp.java:153) finished in 1.577 s 18/04/17 16:44:01 INFO scheduler.DAGScheduler: Job 297 finished: foreachPartition at PredictorEngineApp.java:153, took 1.632023 s 18/04/17 16:44:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xab3bbcf connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xab3bbcf0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37490, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c0d, negotiated timeout = 60000 18/04/17 16:44:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c0d 18/04/17 16:44:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c0d closed 18/04/17 16:44:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:01 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.25 from job set of time 1523972640000 ms 18/04/17 16:44:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 307.0 (TID 307) in 2347 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:44:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 307.0, whose tasks have all completed, from pool 18/04/17 16:44:02 INFO scheduler.DAGScheduler: ResultStage 307 (foreachPartition at PredictorEngineApp.java:153) finished in 2.348 s 18/04/17 16:44:02 INFO scheduler.DAGScheduler: Job 307 finished: foreachPartition at PredictorEngineApp.java:153, took 2.443163 s 18/04/17 16:44:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2d93323d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2d93323d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32901, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c92ff, negotiated timeout = 60000 18/04/17 16:44:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c92ff 18/04/17 16:44:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c92ff closed 18/04/17 16:44:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.8 from job set of time 1523972640000 ms 18/04/17 16:44:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 299.0 (TID 299) in 3771 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:44:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 299.0, whose tasks have all completed, from pool 18/04/17 16:44:03 INFO scheduler.DAGScheduler: ResultStage 299 (foreachPartition at PredictorEngineApp.java:153) finished in 3.772 s 18/04/17 16:44:03 INFO scheduler.DAGScheduler: Job 299 finished: foreachPartition at PredictorEngineApp.java:153, took 3.837979 s 18/04/17 16:44:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x294d815d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x294d815d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54761, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92d8, negotiated timeout = 60000 18/04/17 16:44:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92d8 18/04/17 16:44:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92d8 closed 18/04/17 16:44:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.7 from job set of time 1523972640000 ms 18/04/17 16:44:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 309.0 (TID 309) in 4355 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:44:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 309.0, whose tasks have all completed, from pool 18/04/17 16:44:04 INFO scheduler.DAGScheduler: ResultStage 309 (foreachPartition at PredictorEngineApp.java:153) finished in 4.363 s 18/04/17 16:44:04 INFO scheduler.DAGScheduler: Job 309 finished: foreachPartition at PredictorEngineApp.java:153, took 4.459178 s 18/04/17 16:44:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x355bc301 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x355bc3010x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54765, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92da, negotiated timeout = 60000 18/04/17 16:44:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92da 18/04/17 16:44:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92da closed 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.34 from job set of time 1523972640000 ms 18/04/17 16:44:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 296.0 (TID 296) in 4604 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:44:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 296.0, whose tasks have all completed, from pool 18/04/17 16:44:04 INFO scheduler.DAGScheduler: ResultStage 296 (foreachPartition at PredictorEngineApp.java:153) finished in 4.605 s 18/04/17 16:44:04 INFO scheduler.DAGScheduler: Job 296 finished: foreachPartition at PredictorEngineApp.java:153, took 4.653976 s 18/04/17 16:44:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x903d414 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x903d4140x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54768, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92db, negotiated timeout = 60000 18/04/17 16:44:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92db 18/04/17 16:44:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92db closed 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.33 from job set of time 1523972640000 ms 18/04/17 16:44:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 301.0 (TID 301) in 4698 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:44:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 301.0, whose tasks have all completed, from pool 18/04/17 16:44:04 INFO scheduler.DAGScheduler: ResultStage 301 (foreachPartition at PredictorEngineApp.java:153) finished in 4.699 s 18/04/17 16:44:04 INFO scheduler.DAGScheduler: Job 301 finished: foreachPartition at PredictorEngineApp.java:153, took 4.773493 s 18/04/17 16:44:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x37f2cb56 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x37f2cb560x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54771, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92dc, negotiated timeout = 60000 18/04/17 16:44:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92dc 18/04/17 16:44:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92dc closed 18/04/17 16:44:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.28 from job set of time 1523972640000 ms 18/04/17 16:44:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 308.0 (TID 308) in 5792 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:44:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 308.0, whose tasks have all completed, from pool 18/04/17 16:44:05 INFO scheduler.DAGScheduler: ResultStage 308 (foreachPartition at PredictorEngineApp.java:153) finished in 5.794 s 18/04/17 16:44:05 INFO scheduler.DAGScheduler: Job 308 finished: foreachPartition at PredictorEngineApp.java:153, took 5.886609 s 18/04/17 16:44:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5086d01d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5086d01d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32928, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9302, negotiated timeout = 60000 18/04/17 16:44:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9302 18/04/17 16:44:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9302 closed 18/04/17 16:44:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.31 from job set of time 1523972640000 ms 18/04/17 16:44:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 293.0 (TID 293) in 6705 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:44:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 293.0, whose tasks have all completed, from pool 18/04/17 16:44:06 INFO scheduler.DAGScheduler: ResultStage 293 (foreachPartition at PredictorEngineApp.java:153) finished in 6.705 s 18/04/17 16:44:06 INFO scheduler.DAGScheduler: Job 293 finished: foreachPartition at PredictorEngineApp.java:153, took 6.739448 s 18/04/17 16:44:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7dd70b30 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7dd70b300x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54786, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92de, negotiated timeout = 60000 18/04/17 16:44:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92de 18/04/17 16:44:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92de closed 18/04/17 16:44:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.19 from job set of time 1523972640000 ms 18/04/17 16:44:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 305.0 (TID 305) in 7746 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:44:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 305.0, whose tasks have all completed, from pool 18/04/17 16:44:07 INFO scheduler.DAGScheduler: ResultStage 305 (foreachPartition at PredictorEngineApp.java:153) finished in 7.747 s 18/04/17 16:44:07 INFO scheduler.DAGScheduler: Job 305 finished: foreachPartition at PredictorEngineApp.java:153, took 7.835335 s 18/04/17 16:44:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2651ce8f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2651ce8f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32939, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9303, negotiated timeout = 60000 18/04/17 16:44:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9303 18/04/17 16:44:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9303 closed 18/04/17 16:44:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.12 from job set of time 1523972640000 ms 18/04/17 16:44:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 288.0 (TID 288) in 8498 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:44:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 288.0, whose tasks have all completed, from pool 18/04/17 16:44:08 INFO scheduler.DAGScheduler: ResultStage 288 (foreachPartition at PredictorEngineApp.java:153) finished in 8.499 s 18/04/17 16:44:08 INFO scheduler.DAGScheduler: Job 288 finished: foreachPartition at PredictorEngineApp.java:153, took 8.513193 s 18/04/17 16:44:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5567f9cb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5567f9cb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37538, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c10, negotiated timeout = 60000 18/04/17 16:44:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c10 18/04/17 16:44:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c10 closed 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.24 from job set of time 1523972640000 ms 18/04/17 16:44:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 313.0 (TID 313) in 8435 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:44:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 313.0, whose tasks have all completed, from pool 18/04/17 16:44:08 INFO scheduler.DAGScheduler: ResultStage 313 (foreachPartition at PredictorEngineApp.java:153) finished in 8.436 s 18/04/17 16:44:08 INFO scheduler.DAGScheduler: Job 313 finished: foreachPartition at PredictorEngineApp.java:153, took 8.548600 s 18/04/17 16:44:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x809ae47 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x809ae470x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32946, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9304, negotiated timeout = 60000 18/04/17 16:44:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9304 18/04/17 16:44:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9304 closed 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.5 from job set of time 1523972640000 ms 18/04/17 16:44:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 294.0 (TID 294) in 8831 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:44:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 294.0, whose tasks have all completed, from pool 18/04/17 16:44:08 INFO scheduler.DAGScheduler: ResultStage 294 (foreachPartition at PredictorEngineApp.java:153) finished in 8.832 s 18/04/17 16:44:08 INFO scheduler.DAGScheduler: Job 294 finished: foreachPartition at PredictorEngineApp.java:153, took 8.870373 s 18/04/17 16:44:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x52bc73a0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x52bc73a00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32949, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9306, negotiated timeout = 60000 18/04/17 16:44:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9306 18/04/17 16:44:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9306 closed 18/04/17 16:44:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.15 from job set of time 1523972640000 ms 18/04/17 16:44:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 298.0 (TID 298) in 9371 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:44:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 298.0, whose tasks have all completed, from pool 18/04/17 16:44:09 INFO scheduler.DAGScheduler: ResultStage 298 (foreachPartition at PredictorEngineApp.java:153) finished in 9.371 s 18/04/17 16:44:09 INFO scheduler.DAGScheduler: Job 298 finished: foreachPartition at PredictorEngineApp.java:153, took 9.432236 s 18/04/17 16:44:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x356e8ee4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x356e8ee40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32953, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9307, negotiated timeout = 60000 18/04/17 16:44:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9307 18/04/17 16:44:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9307 closed 18/04/17 16:44:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.18 from job set of time 1523972640000 ms 18/04/17 16:44:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 290.0 (TID 290) in 9719 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:44:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 290.0, whose tasks have all completed, from pool 18/04/17 16:44:09 INFO scheduler.DAGScheduler: ResultStage 290 (foreachPartition at PredictorEngineApp.java:153) finished in 9.719 s 18/04/17 16:44:09 INFO scheduler.DAGScheduler: Job 290 finished: foreachPartition at PredictorEngineApp.java:153, took 9.742545 s 18/04/17 16:44:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1dfafb6e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1dfafb6e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54808, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92e2, negotiated timeout = 60000 18/04/17 16:44:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92e2 18/04/17 16:44:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92e2 closed 18/04/17 16:44:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.29 from job set of time 1523972640000 ms 18/04/17 16:44:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 289.0 (TID 289) in 10124 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:44:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 289.0, whose tasks have all completed, from pool 18/04/17 16:44:10 INFO scheduler.DAGScheduler: ResultStage 289 (foreachPartition at PredictorEngineApp.java:153) finished in 10.124 s 18/04/17 16:44:10 INFO scheduler.DAGScheduler: Job 289 finished: foreachPartition at PredictorEngineApp.java:153, took 10.144102 s 18/04/17 16:44:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3353b81e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3353b81e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32965, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9308, negotiated timeout = 60000 18/04/17 16:44:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9308 18/04/17 16:44:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9308 closed 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 303.0 (TID 303) in 10084 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:44:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 303.0, whose tasks have all completed, from pool 18/04/17 16:44:10 INFO scheduler.DAGScheduler: ResultStage 303 (foreachPartition at PredictorEngineApp.java:153) finished in 10.085 s 18/04/17 16:44:10 INFO scheduler.DAGScheduler: Job 303 finished: foreachPartition at PredictorEngineApp.java:153, took 10.166590 s 18/04/17 16:44:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f32a89b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f32a89b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.23 from job set of time 1523972640000 ms 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37563, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c13, negotiated timeout = 60000 18/04/17 16:44:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c13 18/04/17 16:44:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c13 closed 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.20 from job set of time 1523972640000 ms 18/04/17 16:44:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 310.0 (TID 310) in 10262 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:44:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 310.0, whose tasks have all completed, from pool 18/04/17 16:44:10 INFO scheduler.DAGScheduler: ResultStage 310 (foreachPartition at PredictorEngineApp.java:153) finished in 10.263 s 18/04/17 16:44:10 INFO scheduler.DAGScheduler: Job 310 finished: foreachPartition at PredictorEngineApp.java:153, took 10.368891 s 18/04/17 16:44:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x59e16b15 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x59e16b150x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54822, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92e3, negotiated timeout = 60000 18/04/17 16:44:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92e3 18/04/17 16:44:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92e3 closed 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.32 from job set of time 1523972640000 ms 18/04/17 16:44:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 311.0 (TID 311) in 10311 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:44:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 311.0, whose tasks have all completed, from pool 18/04/17 16:44:10 INFO scheduler.DAGScheduler: ResultStage 311 (foreachPartition at PredictorEngineApp.java:153) finished in 10.312 s 18/04/17 16:44:10 INFO scheduler.DAGScheduler: Job 311 finished: foreachPartition at PredictorEngineApp.java:153, took 10.419982 s 18/04/17 16:44:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6f6a40be connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6f6a40be0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54825, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92e4, negotiated timeout = 60000 18/04/17 16:44:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92e4 18/04/17 16:44:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92e4 closed 18/04/17 16:44:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.6 from job set of time 1523972640000 ms 18/04/17 16:44:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 300.0 (TID 300) in 11034 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:44:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 300.0, whose tasks have all completed, from pool 18/04/17 16:44:11 INFO scheduler.DAGScheduler: ResultStage 300 (foreachPartition at PredictorEngineApp.java:153) finished in 11.035 s 18/04/17 16:44:11 INFO scheduler.DAGScheduler: Job 300 finished: foreachPartition at PredictorEngineApp.java:153, took 11.105964 s 18/04/17 16:44:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5f62b8e6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5f62b8e60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32978, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c930b, negotiated timeout = 60000 18/04/17 16:44:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c930b 18/04/17 16:44:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c930b closed 18/04/17 16:44:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.27 from job set of time 1523972640000 ms 18/04/17 16:44:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 291.0 (TID 291) in 12885 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:44:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 291.0, whose tasks have all completed, from pool 18/04/17 16:44:13 INFO scheduler.DAGScheduler: ResultStage 291 (foreachPartition at PredictorEngineApp.java:153) finished in 12.885 s 18/04/17 16:44:13 INFO scheduler.DAGScheduler: Job 291 finished: foreachPartition at PredictorEngineApp.java:153, took 12.911002 s 18/04/17 16:44:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3e054e2b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3e054e2b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54834, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92e6, negotiated timeout = 60000 18/04/17 16:44:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92e6 18/04/17 16:44:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92e6 closed 18/04/17 16:44:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.1 from job set of time 1523972640000 ms 18/04/17 16:44:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 304.0 (TID 304) in 13735 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:44:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 304.0, whose tasks have all completed, from pool 18/04/17 16:44:13 INFO scheduler.DAGScheduler: ResultStage 304 (foreachPartition at PredictorEngineApp.java:153) finished in 13.736 s 18/04/17 16:44:13 INFO scheduler.DAGScheduler: Job 304 finished: foreachPartition at PredictorEngineApp.java:153, took 13.821649 s 18/04/17 16:44:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x59a8eed4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x59a8eed40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32987, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c930c, negotiated timeout = 60000 18/04/17 16:44:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c930c 18/04/17 16:44:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c930c closed 18/04/17 16:44:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.10 from job set of time 1523972640000 ms 18/04/17 16:44:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 306.0 (TID 306) in 15192 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:44:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 306.0, whose tasks have all completed, from pool 18/04/17 16:44:15 INFO scheduler.DAGScheduler: ResultStage 306 (foreachPartition at PredictorEngineApp.java:153) finished in 15.193 s 18/04/17 16:44:15 INFO scheduler.DAGScheduler: Job 306 finished: foreachPartition at PredictorEngineApp.java:153, took 15.284178 s 18/04/17 16:44:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1dae5112 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1dae51120x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:54845, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92e7, negotiated timeout = 60000 18/04/17 16:44:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92e7 18/04/17 16:44:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92e7 closed 18/04/17 16:44:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:15 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.26 from job set of time 1523972640000 ms 18/04/17 16:44:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 295.0 (TID 295) in 16182 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:44:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 295.0, whose tasks have all completed, from pool 18/04/17 16:44:16 INFO scheduler.DAGScheduler: ResultStage 295 (foreachPartition at PredictorEngineApp.java:153) finished in 16.184 s 18/04/17 16:44:16 INFO scheduler.DAGScheduler: Job 295 finished: foreachPartition at PredictorEngineApp.java:153, took 16.226702 s 18/04/17 16:44:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x783b9b35 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:44:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x783b9b350x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:44:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:44:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33000, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:44:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c930d, negotiated timeout = 60000 18/04/17 16:44:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c930d 18/04/17 16:44:16 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c930d closed 18/04/17 16:44:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:44:16 INFO scheduler.JobScheduler: Finished job streaming job 1523972640000 ms.11 from job set of time 1523972640000 ms 18/04/17 16:44:16 INFO scheduler.JobScheduler: Total delay: 16.350 s for time 1523972640000 ms (execution: 16.282 s) 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 360 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 360 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 360 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 360 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 361 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 361 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 361 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 361 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 362 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 362 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 362 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 362 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 363 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 363 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 363 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 363 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 364 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 364 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 364 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 364 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 365 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 365 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 365 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 365 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 366 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 366 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 366 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 366 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 367 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 367 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 367 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 367 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 368 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 368 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 368 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 368 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 369 from persistence list 18/04/17 
16:44:16 INFO storage.BlockManager: Removing RDD 369 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 369 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 369 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 370 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 370 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 370 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 370 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 371 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 371 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 371 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 371 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 372 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 372 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 372 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 372 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 373 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 373 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 373 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 373 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 374 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 374 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 374 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 374 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 375 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 375 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 375 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 375 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 376 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 376 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 376 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 376 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 377 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 377 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 377 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 377 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 378 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 378 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 378 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 378 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 379 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 379 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 379 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 379 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 380 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 380 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 380 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 380 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 381 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 381 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 381 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 381 
18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 382 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 382 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 382 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 382 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 383 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 383 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 383 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 383 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 384 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 384 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 384 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 384 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 385 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 385 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 385 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 385 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 386 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 386 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 386 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 386 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 387 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 387 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 387 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 387 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 388 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 388 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 388 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 388 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 389 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 389 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 389 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 389 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 390 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 390 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 390 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 390 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 391 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 391 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 391 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 391 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 392 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 392 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 392 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 392 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 393 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 393 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 393 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 393 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 394 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 394 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 
394 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 394 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 395 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 395 18/04/17 16:44:16 INFO kafka.KafkaRDD: Removing RDD 395 from persistence list 18/04/17 16:44:16 INFO storage.BlockManager: Removing RDD 395 18/04/17 16:44:16 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:44:16 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972520000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Added jobs for time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.0 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO spark.SparkContext: Starting job: 
foreachPartition at PredictorEngineApp.java:153 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 314 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.1 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 314 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.2 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.3 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.4 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.5 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.3 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.6 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.7 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.4 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.9 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.0 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.10 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.8 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.11 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.12 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.13 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.13 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.15 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.14 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.14 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 314 (KafkaRDD[459] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.18 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.17 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.16 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.19 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.20 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.17 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.16 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.22 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.21 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.23 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.21 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.24 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.25 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.26 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.27 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.28 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.29 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.30 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.31 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.32 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.33 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.30 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.35 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972700000 ms.34 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_314 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_314_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_314_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 314 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 314 (KafkaRDD[459] at createDirectStream at 
PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 314.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 316 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 315 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 315 (KafkaRDD[461] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 314.0 (TID 314, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_315 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_315_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_315_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 315 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 315 (KafkaRDD[461] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 315.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 315 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 316 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 316 (KafkaRDD[434] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 315.0 (TID 315, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_316 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_316_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_316_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 316 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 316 (KafkaRDD[434] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 316.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 318 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 317 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 
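(The jobs above all share one shape: a KafkaRDD produced by createDirectStream at PredictorEngineApp.java:125 is consumed by a foreachPartition action at PredictorEngineApp.java:153, so every streaming output operation is scheduled as a one-stage, one-task job. The application source is not part of this log, so the following is only a minimal Java sketch of that structure under the Spark 1.6 / Kafka direct-stream API; the class name, broker list, topic, and batch interval are placeholders, not values taken from the real PredictorEngineApp.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    // Hypothetical skeleton; names and settings are assumptions, not the real app's.
    public class PredictorEngineAppSketch {
      public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine");
        // Batch interval is a placeholder; the log only shows batch time 1523972700000 ms.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.minutes(5));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // assumption
        Set<String> topics = new HashSet<>(Arrays.asList("predictor-input"));  // assumption

        // Analogous to "createDirectStream at PredictorEngineApp.java:125":
        // receiver-less direct stream, one KafkaRDD per stream per batch.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
            jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
            kafkaParams, topics);

        // Analogous to "foreachPartition at PredictorEngineApp.java:153":
        // each such output operation becomes a single-stage job per batch.
        stream.foreachRDD(rdd ->
            rdd.foreachPartition(records -> {
              while (records.hasNext()) {
                Tuple2<String, String> record = records.next();
                // score the record and write the result out (HBase, in this log)
              }
            }));

        jssc.start();
        jssc.awaitTermination();
      }
    }

With many such output operations defined on the same batch, each one shows up here as its own "Got job ... / Submitting ResultStage ... / Adding task set ... with 1 tasks" sequence, which is why the job and stage numbers climb in lockstep through this section of the log.)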
18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 317 (KafkaRDD[444] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 316.0 (TID 316, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_317 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_314_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_317_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_317_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 317 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 317 (KafkaRDD[444] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 317.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 317 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 318 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 318 (KafkaRDD[455] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 317.0 (TID 317, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_318 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_315_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_318_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_316_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_318_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 318 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 318 (KafkaRDD[455] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 318.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 319 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 319 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 319 (KafkaRDD[442] 
at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 318.0 (TID 318, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_319 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_319_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_319_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 319 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 319 (KafkaRDD[442] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 319.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 320 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 320 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 320 (KafkaRDD[447] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 319.0 (TID 319, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_320 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_317_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_320_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_320_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 320 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 320 (KafkaRDD[447] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 320.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 321 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 321 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 321 (KafkaRDD[433] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_321 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 320.0 (TID 320, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2064 bytes) 
18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_321_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_321_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 321 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 321 (KafkaRDD[433] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 321.0 with 1 tasks 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_311_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 322 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 322 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 322 (KafkaRDD[450] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_322 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 321.0 (TID 321, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_319_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_318_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_322_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_322_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 322 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 322 (KafkaRDD[450] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 322.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 324 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 323 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 323 (KafkaRDD[439] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_323 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 322.0 (TID 322, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_311_piece0 on ***hostname 
masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_323_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_323_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 323 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 323 (KafkaRDD[439] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 323.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 325 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 324 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 324 (KafkaRDD[443] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_324 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 323.0 (TID 323, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_289_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_289_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_324_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_324_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 324 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 324 (KafkaRDD[443] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 324.0 with 1 tasks 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 290 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 323 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 325 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 325 (KafkaRDD[441] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_325 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_288_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 324.0 (TID 324, ***hostname masked***, executor 9, 
partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_288_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_320_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 289 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_325_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_291_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_325_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 325 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 325 (KafkaRDD[441] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_291_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 325.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 327 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 326 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 326 (KafkaRDD[467] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_322_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_323_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_326 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 292 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 325.0 (TID 325, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_290_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_290_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 314.0 (TID 314) in 65 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 314.0, whose tasks have all completed, from pool 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 291 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_326_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_326_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_293_piece0 on ***IP 
masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 326 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 326 (KafkaRDD[467] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 326.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 326 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 327 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 327 (KafkaRDD[437] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_327 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_293_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 326.0 (TID 326, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_321_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_327_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_327_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 327 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 327 (KafkaRDD[437] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 327.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 328 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 328 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 328 (KafkaRDD[456] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_328 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 327.0 (TID 327, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 294 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_292_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_324_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_328_piece0 stored as bytes in 
memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_328_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 328 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 328 (KafkaRDD[456] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 328.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 329 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 329 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 329 (KafkaRDD[454] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_292_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_329 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 328.0 (TID 328, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 293 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_295_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_325_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_326_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_295_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_329_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_329_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 329 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 329 (KafkaRDD[454] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 329.0 with 1 tasks 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 296 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 331 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 330 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 330 (KafkaRDD[458] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO 
storage.MemoryStore: Block broadcast_330 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_294_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 329.0 (TID 329, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_294_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 295 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_330_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_330_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_296_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 330 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 330 (KafkaRDD[458] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 330.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 330 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 331 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_327_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 331 (KafkaRDD[466] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_331 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_296_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 330.0 (TID 330, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 297 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_298_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_331_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_331_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 331 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 331 (KafkaRDD[466] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 331.0 with 1 tasks 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_298_piece0 on ***hostname masked***:60107 in memory 
(size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 332 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 332 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 332 (KafkaRDD[460] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_332 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 299 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 331.0 (TID 331, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_297_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_297_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 298 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_332_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_332_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_300_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 332 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 332 (KafkaRDD[460] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 332.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 334 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 333 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 333 (KafkaRDD[464] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_333 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_300_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_330_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 332.0 (TID 332, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 301 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_328_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: 
Removed broadcast_299_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_331_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_299_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_333_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_333_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 333 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 333 (KafkaRDD[464] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 333.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 333 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 334 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 300 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 334 (KafkaRDD[438] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_334 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_302_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 333.0 (TID 333, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_329_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_302_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 303 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_301_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_334_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_334_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 334 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 334 (KafkaRDD[438] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 334.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 336 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 335 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO 
scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 335 (KafkaRDD[457] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_301_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_335 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 302 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_304_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 334.0 (TID 334, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_332_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_333_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_304_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_335_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_335_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 305 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 335 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 335 (KafkaRDD[457] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 335.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 335 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 336 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 336 (KafkaRDD[465] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_303_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_336 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_303_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 335.0 (TID 335, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 304 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_336_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_336_piece0 in 
memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_306_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 336 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 336 (KafkaRDD[465] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 336.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 337 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 337 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 337 (KafkaRDD[451] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_337 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_306_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 336.0 (TID 336, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 307 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_305_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_337_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_337_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 337 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 337 (KafkaRDD[451] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 337.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 338 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 338 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 338 (KafkaRDD[452] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_338 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_305_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 337.0 (TID 337, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_334_piece0 in memory on 
***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 306 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_338_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_338_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_308_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 338 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 338 (KafkaRDD[452] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 338.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 339 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 339 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 339 (KafkaRDD[463] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_339 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_308_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 338.0 (TID 338, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 309 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_339_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_336_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_339_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_307_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 339 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 339 (KafkaRDD[463] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 339.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Got job 340 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 340 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting ResultStage 340 (KafkaRDD[440] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:45:00 INFO 
storage.BlockManagerInfo: Removed broadcast_307_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_340 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_335_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 308 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 339.0 (TID 339, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_337_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_310_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.MemoryStore: Block broadcast_340_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_340_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO spark.SparkContext: Created broadcast 340 from broadcast at DAGScheduler.scala:1006 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 340 (KafkaRDD[440] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Adding task set 340.0 with 1 tasks 18/04/17 16:45:00 INFO scheduler.DAGScheduler: ResultStage 314 (foreachPartition at PredictorEngineApp.java:153) finished in 0.114 s 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Job 314 finished: foreachPartition at PredictorEngineApp.java:153, took 0.127257 s 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 340.0 (TID 340, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:45:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d11e04a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d11e04a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_310_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 311 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33196, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_338_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_309_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_339_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_309_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 310 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Added broadcast_340_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_312_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_312_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 313 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 312 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_313_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:00 INFO storage.BlockManagerInfo: Removed broadcast_313_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:00 INFO spark.ContextCleaner: Cleaned accumulator 314 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c931a, negotiated timeout = 60000 18/04/17 16:45:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c931a 18/04/17 16:45:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c931a closed 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.27 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 326.0 (TID 326) in 153 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 326.0, whose tasks have all completed, from pool 18/04/17 16:45:00 INFO scheduler.DAGScheduler: ResultStage 326 (foreachPartition at PredictorEngineApp.java:153) finished in 0.154 s 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Job 327 finished: foreachPartition at PredictorEngineApp.java:153, took 0.233526 s 18/04/17 16:45:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x30bc30fd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x30bc30fd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 
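(Each "Job ... finished: foreachPartition ..." entry in this driver log is followed by an hconnection-0x... client opening a ZooKeeper session against the /hbase quorum and closing it a moment later ("Closing zookeeper sessionid" / "Session ... closed" / "EventThread shut down"). That is the usual footprint of a short-lived HBase Connection being created and closed around each batch's bookkeeping, for example recording processed offsets or a status row; what is actually written is not visible in the log. The fragment below is only a hedged sketch of that open-write-close pattern using the HBase 1.x ConnectionFactory API, which is the client family the ConnectionManager$HConnectionImplementation messages above belong to; the table, column family, qualifier, and row contents are invented placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical helper; everything below the API calls is a placeholder.
    public class HBaseWriteSketch {
      public static void recordBatch(String rowKey, String value) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        // ConnectionFactory.createConnection opens the hconnection-0x... ZooKeeper session
        // logged by RecoverableZooKeeper above.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("predictions"))) { // placeholder table
          Put put = new Put(Bytes.toBytes(rowKey));
          put.addColumn(Bytes.toBytes("p"), Bytes.toBytes("value"), Bytes.toBytes(value));
          table.put(put);
        } // closing the Connection produces the "Closing zookeeper sessionid" / "Session ... closed" lines
      }
    }

Opening a fresh connection for every batch works but pays a ZooKeeper round trip each time, which is why the same connect/close pair repeats for every finished job in this batch; caching one Connection for the lifetime of the driver is a common variation.)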
18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33199, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 323.0 (TID 323) in 170 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: ResultStage 323 (foreachPartition at PredictorEngineApp.java:153) finished in 0.170 s 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 323.0, whose tasks have all completed, from pool 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Job 324 finished: foreachPartition at PredictorEngineApp.java:153, took 0.237993 s 18/04/17 16:45:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x185b6cac connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x185b6cac0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55051, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c931b, negotiated timeout = 60000 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92f1, negotiated timeout = 60000 18/04/17 16:45:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c931b 18/04/17 16:45:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c931b closed 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92f1 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 330.0 (TID 330) in 165 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:45:00 INFO scheduler.DAGScheduler: ResultStage 330 (foreachPartition at PredictorEngineApp.java:153) finished in 0.166 s 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 330.0, whose tasks have all completed, from pool 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Job 331 finished: foreachPartition at PredictorEngineApp.java:153, took 0.259767 s 18/04/17 16:45:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x44e5c2c8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x44e5c2c80x0, quorum=***hostname 
masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55056, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.35 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92f1 closed 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92f2, negotiated timeout = 60000 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.7 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92f2 18/04/17 16:45:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92f2 closed 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.26 from job set of time 1523972700000 ms 18/04/17 16:45:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 335.0 (TID 335) in 646 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:45:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 335.0, whose tasks have all completed, from pool 18/04/17 16:45:00 INFO scheduler.DAGScheduler: ResultStage 335 (foreachPartition at PredictorEngineApp.java:153) finished in 0.647 s 18/04/17 16:45:00 INFO scheduler.DAGScheduler: Job 336 finished: foreachPartition at PredictorEngineApp.java:153, took 0.759092 s 18/04/17 16:45:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1e21c343 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1e21c3430x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55060, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92f9, negotiated timeout = 60000 18/04/17 16:45:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92f9 18/04/17 16:45:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92f9 closed 18/04/17 16:45:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.25 from job set of time 1523972700000 ms 18/04/17 16:45:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 340.0 (TID 340) in 1172 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:45:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 340.0, whose tasks have all completed, from pool 18/04/17 16:45:01 INFO scheduler.DAGScheduler: ResultStage 340 (foreachPartition at PredictorEngineApp.java:153) finished in 1.173 s 18/04/17 16:45:01 INFO scheduler.DAGScheduler: Job 340 finished: foreachPartition at PredictorEngineApp.java:153, took 1.296251 s 18/04/17 16:45:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4eb035ca connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4eb035ca0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33213, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c931d, negotiated timeout = 60000 18/04/17 16:45:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c931d 18/04/17 16:45:01 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c931d closed 18/04/17 16:45:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:01 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.8 from job set of time 1523972700000 ms 18/04/17 16:45:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 316.0 (TID 316) in 3973 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:45:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 316.0, whose tasks have all completed, from pool 18/04/17 16:45:04 INFO scheduler.DAGScheduler: ResultStage 316 (foreachPartition at PredictorEngineApp.java:153) finished in 3.974 s 18/04/17 16:45:04 INFO scheduler.DAGScheduler: Job 315 finished: foreachPartition at PredictorEngineApp.java:153, took 3.998421 s 18/04/17 16:45:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1625eebd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1625eebd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37819, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c2e, negotiated timeout = 60000 18/04/17 16:45:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c2e 18/04/17 16:45:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c2e closed 18/04/17 16:45:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.2 from job set of time 1523972700000 ms 18/04/17 16:45:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 320.0 (TID 320) in 4207 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:45:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 320.0, whose tasks have all completed, from pool 18/04/17 16:45:04 INFO scheduler.DAGScheduler: ResultStage 320 (foreachPartition at PredictorEngineApp.java:153) finished in 4.207 s 18/04/17 16:45:04 INFO scheduler.DAGScheduler: Job 320 finished: foreachPartition at PredictorEngineApp.java:153, took 4.252428 s 18/04/17 16:45:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d73e9ad connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d73e9ad0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55078, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92fc, negotiated timeout = 60000 18/04/17 16:45:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92fc 18/04/17 16:45:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92fc closed 18/04/17 16:45:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.15 from job set of time 1523972700000 ms 18/04/17 16:45:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 339.0 (TID 339) in 5803 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:45:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 339.0, whose tasks have all completed, from pool 18/04/17 16:45:06 INFO scheduler.DAGScheduler: ResultStage 339 (foreachPartition at PredictorEngineApp.java:153) finished in 5.804 s 18/04/17 16:45:06 INFO scheduler.DAGScheduler: Job 339 finished: foreachPartition at PredictorEngineApp.java:153, took 5.927257 s 18/04/17 16:45:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66e49837 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x66e498370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33233, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c931e, negotiated timeout = 60000 18/04/17 16:45:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c931e 18/04/17 16:45:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c931e closed 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.31 from job set of time 1523972700000 ms 18/04/17 16:45:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 332.0 (TID 332) in 6400 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:45:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 332.0, whose tasks have all completed, from pool 18/04/17 16:45:06 INFO scheduler.DAGScheduler: ResultStage 332 (foreachPartition at PredictorEngineApp.java:153) finished in 6.401 s 18/04/17 16:45:06 INFO scheduler.DAGScheduler: Job 332 finished: foreachPartition at PredictorEngineApp.java:153, took 6.501570 s 18/04/17 16:45:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f48d1c5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f48d1c50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55087, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92fd, negotiated timeout = 60000 18/04/17 16:45:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92fd 18/04/17 16:45:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92fd closed 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.28 from job set of time 1523972700000 ms 18/04/17 16:45:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 334.0 (TID 334) in 6495 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:45:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 334.0, whose tasks have all completed, from pool 18/04/17 16:45:06 INFO scheduler.DAGScheduler: ResultStage 334 (foreachPartition at PredictorEngineApp.java:153) finished in 6.497 s 18/04/17 16:45:06 INFO scheduler.DAGScheduler: Job 333 finished: foreachPartition at PredictorEngineApp.java:153, took 6.605797 s 18/04/17 16:45:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2729e7cd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2729e7cd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33239, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9320, negotiated timeout = 60000 18/04/17 16:45:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9320 18/04/17 16:45:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9320 closed 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.6 from job set of time 1523972700000 ms 18/04/17 16:45:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 337.0 (TID 337) in 6747 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:45:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 337.0, whose tasks have all completed, from pool 18/04/17 16:45:06 INFO scheduler.DAGScheduler: ResultStage 337 (foreachPartition at PredictorEngineApp.java:153) finished in 6.748 s 18/04/17 16:45:06 INFO scheduler.DAGScheduler: Job 337 finished: foreachPartition at PredictorEngineApp.java:153, took 6.866523 s 18/04/17 16:45:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63122f30 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x63122f300x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33243, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9321, negotiated timeout = 60000 18/04/17 16:45:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9321 18/04/17 16:45:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9321 closed 18/04/17 16:45:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.19 from job set of time 1523972700000 ms 18/04/17 16:45:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 318.0 (TID 318) in 7112 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:45:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 318.0, whose tasks have all completed, from pool 18/04/17 16:45:07 INFO scheduler.DAGScheduler: ResultStage 318 (foreachPartition at PredictorEngineApp.java:153) finished in 7.112 s 18/04/17 16:45:07 INFO scheduler.DAGScheduler: Job 317 finished: foreachPartition at PredictorEngineApp.java:153, took 7.147520 s 18/04/17 16:45:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3214c554 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3214c5540x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33246, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9322, negotiated timeout = 60000 18/04/17 16:45:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9322 18/04/17 16:45:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9322 closed 18/04/17 16:45:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.23 from job set of time 1523972700000 ms 18/04/17 16:45:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 328.0 (TID 328) in 9159 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:45:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 328.0, whose tasks have all completed, from pool 18/04/17 16:45:09 INFO scheduler.DAGScheduler: ResultStage 328 (foreachPartition at PredictorEngineApp.java:153) finished in 9.160 s 18/04/17 16:45:09 INFO scheduler.DAGScheduler: Job 328 finished: foreachPartition at PredictorEngineApp.java:153, took 9.246768 s 18/04/17 16:45:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7ea7e94b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7ea7e94b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37849, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c32, negotiated timeout = 60000 18/04/17 16:45:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c32 18/04/17 16:45:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c32 closed 18/04/17 16:45:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.24 from job set of time 1523972700000 ms 18/04/17 16:45:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 317.0 (TID 317) in 9972 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:45:10 INFO scheduler.DAGScheduler: ResultStage 317 (foreachPartition at PredictorEngineApp.java:153) finished in 9.972 s 18/04/17 16:45:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 317.0, whose tasks have all completed, from pool 18/04/17 16:45:10 INFO scheduler.DAGScheduler: Job 318 finished: foreachPartition at PredictorEngineApp.java:153, took 10.025904 s 18/04/17 16:45:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6f11732f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6f11732f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55109, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92fe, negotiated timeout = 60000 18/04/17 16:45:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92fe 18/04/17 16:45:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92fe closed 18/04/17 16:45:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.12 from job set of time 1523972700000 ms 18/04/17 16:45:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 325.0 (TID 325) in 10760 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:45:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 325.0, whose tasks have all completed, from pool 18/04/17 16:45:10 INFO scheduler.DAGScheduler: ResultStage 325 (foreachPartition at PredictorEngineApp.java:153) finished in 10.761 s 18/04/17 16:45:10 INFO scheduler.DAGScheduler: Job 323 finished: foreachPartition at PredictorEngineApp.java:153, took 10.835943 s 18/04/17 16:45:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5664865d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5664865d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55114, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a92ff, negotiated timeout = 60000 18/04/17 16:45:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 315.0 (TID 315) in 10829 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:45:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 315.0, whose tasks have all completed, from pool 18/04/17 16:45:10 INFO scheduler.DAGScheduler: ResultStage 315 (foreachPartition at PredictorEngineApp.java:153) finished in 10.829 s 18/04/17 16:45:10 INFO scheduler.DAGScheduler: Job 316 finished: foreachPartition at PredictorEngineApp.java:153, took 10.847265 s 18/04/17 16:45:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a92ff 18/04/17 16:45:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.29 from job set of time 1523972700000 ms 18/04/17 16:45:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a92ff closed 18/04/17 16:45:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.9 from job set of time 1523972700000 ms 18/04/17 16:45:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 333.0 (TID 333) in 10902 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:45:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 333.0, whose tasks have all completed, from pool 18/04/17 16:45:11 INFO scheduler.DAGScheduler: ResultStage 333 (foreachPartition at PredictorEngineApp.java:153) finished in 10.903 s 18/04/17 16:45:11 INFO scheduler.DAGScheduler: Job 334 finished: foreachPartition at PredictorEngineApp.java:153, took 11.007904 s 18/04/17 16:45:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5d0af0e9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5d0af0e90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55117, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9300, negotiated timeout = 60000 18/04/17 16:45:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9300 18/04/17 16:45:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9300 closed 18/04/17 16:45:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.32 from job set of time 1523972700000 ms 18/04/17 16:45:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 319.0 (TID 319) in 11104 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:45:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 319.0, whose tasks have all completed, from pool 18/04/17 16:45:11 INFO scheduler.DAGScheduler: ResultStage 319 (foreachPartition at PredictorEngineApp.java:153) finished in 11.104 s 18/04/17 16:45:11 INFO scheduler.DAGScheduler: Job 319 finished: foreachPartition at PredictorEngineApp.java:153, took 11.144278 s 18/04/17 16:45:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6a83f22a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6a83f22a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33269, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9325, negotiated timeout = 60000 18/04/17 16:45:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9325 18/04/17 16:45:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9325 closed 18/04/17 16:45:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.10 from job set of time 1523972700000 ms 18/04/17 16:45:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 338.0 (TID 338) in 12501 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:45:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 338.0, whose tasks have all completed, from pool 18/04/17 16:45:12 INFO scheduler.DAGScheduler: ResultStage 338 (foreachPartition at PredictorEngineApp.java:153) finished in 12.502 s 18/04/17 16:45:12 INFO scheduler.DAGScheduler: Job 338 finished: foreachPartition at PredictorEngineApp.java:153, took 12.622514 s 18/04/17 16:45:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x34207666 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x342076660x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37868, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c34, negotiated timeout = 60000 18/04/17 16:45:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c34 18/04/17 16:45:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c34 closed 18/04/17 16:45:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:12 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.20 from job set of time 1523972700000 ms 18/04/17 16:45:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 327.0 (TID 327) in 13371 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:45:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 327.0, whose tasks have all completed, from pool 18/04/17 16:45:13 INFO scheduler.DAGScheduler: ResultStage 327 (foreachPartition at PredictorEngineApp.java:153) finished in 13.372 s 18/04/17 16:45:13 INFO scheduler.DAGScheduler: Job 326 finished: foreachPartition at PredictorEngineApp.java:153, took 13.455281 s 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 340 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 315 18/04/17 16:45:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1470088a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1470088a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37872, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_315_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_315_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 316 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_314_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_314_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 318 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_316_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_316_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 317 18/04/17 16:45:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c36, negotiated timeout = 60000 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_318_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_318_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 319 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_317_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_317_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 321 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_319_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c36 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_319_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 320 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_320_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_320_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 324 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_323_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_323_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c36 closed 18/04/17 16:45:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:13 INFO 
spark.ContextCleaner: Cleaned accumulator 327 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_325_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_325_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 326 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_327_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_327_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 328 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_326_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_326_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_328_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.5 from job set of time 1523972700000 ms 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_328_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 329 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_330_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_330_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 331 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 333 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_333_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_333_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 334 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_332_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_332_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 335 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_335_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_335_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 336 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_334_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_334_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 338 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_338_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, 
free: 491.2 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_338_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 339 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_337_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_337_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_340_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_340_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:13 INFO spark.ContextCleaner: Cleaned accumulator 341 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_339_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:45:13 INFO storage.BlockManagerInfo: Removed broadcast_339_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:45:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 322.0 (TID 322) in 15268 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:45:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 322.0, whose tasks have all completed, from pool 18/04/17 16:45:15 INFO scheduler.DAGScheduler: ResultStage 322 (foreachPartition at PredictorEngineApp.java:153) finished in 15.268 s 18/04/17 16:45:15 INFO scheduler.DAGScheduler: Job 322 finished: foreachPartition at PredictorEngineApp.java:153, took 15.332737 s 18/04/17 16:45:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6719f3f1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6719f3f10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33283, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9328, negotiated timeout = 60000 18/04/17 16:45:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9328 18/04/17 16:45:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9328 closed 18/04/17 16:45:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:15 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.18 from job set of time 1523972700000 ms 18/04/17 16:45:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 321.0 (TID 321) in 16723 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:45:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 321.0, whose tasks have all completed, from pool 18/04/17 16:45:16 INFO scheduler.DAGScheduler: ResultStage 321 (foreachPartition at PredictorEngineApp.java:153) finished in 16.725 s 18/04/17 16:45:16 INFO scheduler.DAGScheduler: Job 321 finished: foreachPartition at PredictorEngineApp.java:153, took 16.785989 s 18/04/17 16:45:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1466fc52 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1466fc520x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37883, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c37, negotiated timeout = 60000 18/04/17 16:45:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c37 18/04/17 16:45:16 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c37 closed 18/04/17 16:45:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:16 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.1 from job set of time 1523972700000 ms 18/04/17 16:45:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 331.0 (TID 331) in 17328 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:45:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 331.0, whose tasks have all completed, from pool 18/04/17 16:45:17 INFO scheduler.DAGScheduler: ResultStage 331 (foreachPartition at PredictorEngineApp.java:153) finished in 17.329 s 18/04/17 16:45:17 INFO scheduler.DAGScheduler: Job 330 finished: foreachPartition at PredictorEngineApp.java:153, took 17.425497 s 18/04/17 16:45:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6bc4d90c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6bc4d90c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33292, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c932b, negotiated timeout = 60000 18/04/17 16:45:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c932b 18/04/17 16:45:17 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c932b closed 18/04/17 16:45:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:17 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.34 from job set of time 1523972700000 ms 18/04/17 16:45:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 336.0 (TID 336) in 17541 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:45:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 336.0, whose tasks have all completed, from pool 18/04/17 16:45:17 INFO scheduler.DAGScheduler: ResultStage 336 (foreachPartition at PredictorEngineApp.java:153) finished in 17.542 s 18/04/17 16:45:17 INFO scheduler.DAGScheduler: Job 335 finished: foreachPartition at PredictorEngineApp.java:153, took 17.657908 s 18/04/17 16:45:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x253dd34b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x253dd34b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37890, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c38, negotiated timeout = 60000 18/04/17 16:45:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c38 18/04/17 16:45:17 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c38 closed 18/04/17 16:45:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:17 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.33 from job set of time 1523972700000 ms 18/04/17 16:45:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 324.0 (TID 324) in 21365 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:45:21 INFO scheduler.DAGScheduler: ResultStage 324 (foreachPartition at PredictorEngineApp.java:153) finished in 21.365 s 18/04/17 16:45:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 324.0, whose tasks have all completed, from pool 18/04/17 16:45:21 INFO scheduler.DAGScheduler: Job 325 finished: foreachPartition at PredictorEngineApp.java:153, took 21.437369 s 18/04/17 16:45:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x791b39aa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x791b39aa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33305, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c932e, negotiated timeout = 60000 18/04/17 16:45:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c932e 18/04/17 16:45:21 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c932e closed 18/04/17 16:45:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:21 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.11 from job set of time 1523972700000 ms 18/04/17 16:45:30 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 51.0 (TID 51) in 689866 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:45:30 INFO cluster.YarnClusterScheduler: Removed TaskSet 51.0, whose tasks have all completed, from pool 18/04/17 16:45:30 INFO scheduler.DAGScheduler: ResultStage 51 (foreachPartition at PredictorEngineApp.java:153) finished in 689.867 s 18/04/17 16:45:30 INFO scheduler.DAGScheduler: Job 49 finished: foreachPartition at PredictorEngineApp.java:153, took 690.103371 s 18/04/17 16:45:30 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x74563323 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:45:30 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x745633230x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:45:30 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:45:30 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55177, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:45:30 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a930a, negotiated timeout = 60000 18/04/17 16:45:30 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a930a 18/04/17 16:45:30 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a930a closed 18/04/17 16:45:30 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:45:30 INFO scheduler.JobScheduler: Finished job streaming job 1523972040000 ms.26 from job set of time 1523972040000 ms 18/04/17 16:45:30 INFO scheduler.JobScheduler: Total delay: 690.278 s for time 1523972040000 ms (execution: 690.192 s) 18/04/17 16:45:30 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:45:30 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 16:46:00 INFO scheduler.JobScheduler: Added jobs for time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.1 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.2 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.0 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.0 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.3 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.5 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.4 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.4 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.3 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.7 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.6 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.8 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.9 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.10 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.11 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.12 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.13 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.13 from job set of time 1523972760000 ms 
18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.14 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.15 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.17 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.16 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.17 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.18 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.16 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.19 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.20 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.21 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.21 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.22 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.24 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.23 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.14 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.26 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.25 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.27 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.28 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.29 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.30 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.30 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.33 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.31 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.32 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.34 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972760000 ms.35 from job set of time 
1523972760000 ms 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 341 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 341 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 341 (KafkaRDD[480] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_341 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:46:00 INFO spark.SparkContext: Starting job: foreachPartition at 
PredictorEngineApp.java:153 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_341_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_341_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 341 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 341 (KafkaRDD[480] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 341.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 342 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 342 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 342 (KafkaRDD[490] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 341.0 (TID 341, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_342 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_342_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_342_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 342 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 342 (KafkaRDD[490] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 342.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 343 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 343 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 343 (KafkaRDD[494] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 342.0 (TID 342, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_343 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_343_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_343_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 343 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 343 (KafkaRDD[494] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 343.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 344 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 344 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 344 (KafkaRDD[483] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 343.0 (TID 343, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_344 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_344_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_344_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 344 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 344 (KafkaRDD[483] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_341_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 344.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 345 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 345 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 345 (KafkaRDD[497] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 344.0 (TID 344, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_345 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_345_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_345_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 345 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 345 (KafkaRDD[497] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 345.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 346 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 346 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 346 (KafkaRDD[469] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 345.0 (TID 345, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_346 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_342_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_343_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_346_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_346_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 346 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 346 (KafkaRDD[469] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 346.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 347 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 347 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 347 (KafkaRDD[478] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_347 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 346.0 (TID 346, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_344_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_347_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_347_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 347 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 347 (KafkaRDD[478] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 347.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 349 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 348 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of 
final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 348 (KafkaRDD[496] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_348 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 347.0 (TID 347, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_345_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_348_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_348_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 348 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 348 (KafkaRDD[496] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 348.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 348 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 349 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 349 (KafkaRDD[487] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_349 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 348.0 (TID 348, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_349_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_349_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 349 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 349 (KafkaRDD[487] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 349.0 with 1 tasks 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_346_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 351 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 350 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 350 (KafkaRDD[470] at createDirectStream at PredictorEngineApp.java:125), 
which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_350 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 349.0 (TID 349, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_347_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_350_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_350_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 350 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 350 (KafkaRDD[470] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 350.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 350 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 351 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 351 (KafkaRDD[479] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_351 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 350.0 (TID 350, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_349_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_351_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_351_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 351 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 351 (KafkaRDD[479] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 351.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 352 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 352 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 352 (KafkaRDD[502] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_352 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 
351.0 (TID 351, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_348_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_352_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_352_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 352 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 352 (KafkaRDD[502] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 352.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 353 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 353 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 353 (KafkaRDD[493] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_353 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 352.0 (TID 352, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_353_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_353_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 353 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 353 (KafkaRDD[493] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 353.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 355 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 354 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 354 (KafkaRDD[501] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_354 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_351_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 353.0 (TID 353, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_350_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 
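While the scheduler lines here show each single-partition KafkaRDD stage being broadcast and dispatched, a second pattern is visible around every finished foreachPartition job: at 16:45:30 above, and again once the 16:46:00 jobs start completing below, the driver creates an hconnection, which negotiates a brand-new ZooKeeper session, and closes it within the same second ("Closing zookeeper sessionid ... Session ... closed ... EventThread shut down"). That is the behaviour of building a fresh HBase connection per write rather than reusing one. As a generic sketch only, assuming the HBase 1.x client API and not taken from PredictorEngineApp, the following shows a lazily initialized, JVM-wide connection that could be shared across batches; the table and column names are placeholders.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public final class HBaseConnectionHolder {
  private static volatile Connection connection;

  private HBaseConnectionHolder() {}

  // Create the heavyweight Connection (and its ZooKeeper session) once per JVM
  // and reuse it, instead of opening a new hconnection + ZooKeeper session per write.
  public static Connection get() throws IOException {
    if (connection == null) {
      synchronized (HBaseConnectionHolder.class) {
        if (connection == null) {
          Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
          connection = ConnectionFactory.createConnection(conf);
        }
      }
    }
    return connection;
  }

  // Example write; "predictions", "d" and "score" are placeholder names.
  // Table instances are lightweight and are opened and closed per use.
  public static void writePrediction(byte[] rowKey, byte[] score) throws IOException {
    try (Table table = get().getTable(TableName.valueOf("predictions"))) {
      Put put = new Put(rowKey);
      put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("score"), score);
      table.put(put);
    }
  }
}

Whether sharing a connection this way suits the application is a design choice; the only point taken from the log itself is that each finished job is paired with a ZooKeeper session that lives for a fraction of a second.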
18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_354_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_354_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 354 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 354 (KafkaRDD[501] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 354.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 354 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 355 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 355 (KafkaRDD[492] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_355 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 354.0 (TID 354, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_352_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_355_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_355_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 355 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 355 (KafkaRDD[492] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 355.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 356 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 356 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 356 (KafkaRDD[486] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_356 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 355.0 (TID 355, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_356_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_356_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 356 from broadcast at 
DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 356 (KafkaRDD[486] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 356.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 357 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 357 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 357 (KafkaRDD[473] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_357 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 356.0 (TID 356, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_354_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_357_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_357_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 357 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 357 (KafkaRDD[473] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 357.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 358 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 358 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 358 (KafkaRDD[475] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_358 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_355_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 357.0 (TID 357, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_358_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_358_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 358 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 358 (KafkaRDD[475] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO 
cluster.YarnClusterScheduler: Adding task set 358.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 359 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 359 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 359 (KafkaRDD[474] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_359 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 358.0 (TID 358, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_353_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_356_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_359_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_359_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 359 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 359 (KafkaRDD[474] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 359.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 360 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 360 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 360 (KafkaRDD[476] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_357_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_360 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 359.0 (TID 359, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_360_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_360_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 360 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 360 (KafkaRDD[476] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 360.0 with 1 tasks 18/04/17 16:46:00 INFO 
spark.ContextCleaner: Cleaned accumulator 325 18/04/17 16:46:00 INFO spark.ContextCleaner: Cleaned accumulator 323 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 361 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 361 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 361 (KafkaRDD[495] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_361 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 360.0 (TID 360, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Removed broadcast_321_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 349.0 (TID 349) in 57 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 349.0, whose tasks have all completed, from pool 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Removed broadcast_321_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_358_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_361_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_361_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 361 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 361 (KafkaRDD[495] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 361.0 with 1 tasks 18/04/17 16:46:00 INFO spark.ContextCleaner: Cleaned accumulator 322 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 362 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 362 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 362 (KafkaRDD[491] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_362 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 361.0 (TID 361, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Removed broadcast_324_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Removed broadcast_324_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, 
free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_362_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_362_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 362 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 362 (KafkaRDD[491] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 362.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 363 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 363 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 363 (KafkaRDD[500] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_363 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Removed broadcast_322_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 362.0 (TID 362, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Removed broadcast_322_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_361_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_360_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_363_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_363_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 363 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO spark.ContextCleaner: Cleaned accumulator 332 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 363 (KafkaRDD[500] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 363.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 364 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 364 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 364 (KafkaRDD[477] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_364 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:46:00 INFO 
storage.BlockManagerInfo: Added broadcast_359_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Removed broadcast_336_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 363.0 (TID 363, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Removed broadcast_336_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO spark.ContextCleaner: Cleaned accumulator 337 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_364_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_364_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Removed broadcast_331_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 364 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 364 (KafkaRDD[477] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 364.0 with 1 tasks 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_362_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 365 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 365 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 365 (KafkaRDD[488] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Removed broadcast_331_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_365 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 364.0 (TID 364, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_365_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_365_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 365 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 365 (KafkaRDD[488] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 365.0 with 1 tasks 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_363_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 366 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 366 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 366 (KafkaRDD[499] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_366 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 365.0 (TID 365, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_366_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_366_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 366 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 366 (KafkaRDD[499] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 366.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Got job 367 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 367 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting ResultStage 367 (KafkaRDD[503] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_367 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 366.0 (TID 366, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:46:00 INFO storage.MemoryStore: Block broadcast_367_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_367_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:46:00 INFO spark.SparkContext: Created broadcast 367 from broadcast at DAGScheduler.scala:1006 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 367 (KafkaRDD[503] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Adding task set 367.0 with 1 tasks 18/04/17 16:46:00 INFO scheduler.DAGScheduler: ResultStage 349 (foreachPartition at PredictorEngineApp.java:153) finished in 0.081 s 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_365_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Job 348 finished: foreachPartition at PredictorEngineApp.java:153, took 0.127843 s 18/04/17 16:46:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5e0d3bd8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:00 INFO zookeeper.ZooKeeper: Initiating client 
connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5e0d3bd80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_364_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 367.0 (TID 367, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33487, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_366_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 353.0 (TID 353) in 71 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:46:00 INFO scheduler.DAGScheduler: ResultStage 353 (foreachPartition at PredictorEngineApp.java:153) finished in 0.072 s 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 353.0, whose tasks have all completed, from pool 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Job 353 finished: foreachPartition at PredictorEngineApp.java:153, took 0.134547 s 18/04/17 16:46:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xe3032b6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xe3032b60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:00 INFO storage.BlockManagerInfo: Added broadcast_367_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55339, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9339, negotiated timeout = 60000 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9311, negotiated timeout = 60000 18/04/17 16:46:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9339 18/04/17 16:46:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9339 closed 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9311 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.19 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9311 closed 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.25 from job set of time 1523972760000 ms 18/04/17 16:46:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 367.0 (TID 367) in 401 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:46:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 367.0, whose tasks have all completed, from pool 18/04/17 16:46:00 INFO scheduler.DAGScheduler: ResultStage 367 (foreachPartition at PredictorEngineApp.java:153) finished in 0.403 s 18/04/17 16:46:00 INFO scheduler.DAGScheduler: Job 367 finished: foreachPartition at PredictorEngineApp.java:153, took 0.522361 s 18/04/17 16:46:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2be04b4e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2be04b4e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33493, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c933f, negotiated timeout = 60000 18/04/17 16:46:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c933f 18/04/17 16:46:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c933f closed 18/04/17 16:46:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.35 from job set of time 1523972760000 ms 18/04/17 16:46:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 358.0 (TID 358) in 3168 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:46:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 358.0, whose tasks have all completed, from pool 18/04/17 16:46:03 INFO scheduler.DAGScheduler: ResultStage 358 (foreachPartition at PredictorEngineApp.java:153) finished in 3.170 s 18/04/17 16:46:03 INFO scheduler.DAGScheduler: Job 358 finished: foreachPartition at PredictorEngineApp.java:153, took 3.243104 s 18/04/17 16:46:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x22647f73 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x22647f730x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55352, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9318, negotiated timeout = 60000 18/04/17 16:46:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9318 18/04/17 16:46:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9318 closed 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.7 from job set of time 1523972760000 ms 18/04/17 16:46:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 360.0 (TID 360) in 3209 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:46:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 360.0, whose tasks have all completed, from pool 18/04/17 16:46:03 INFO scheduler.DAGScheduler: ResultStage 360 (foreachPartition at PredictorEngineApp.java:153) finished in 3.210 s 18/04/17 16:46:03 INFO scheduler.DAGScheduler: Job 360 finished: foreachPartition at PredictorEngineApp.java:153, took 3.304942 s 18/04/17 16:46:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x638ed63f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x638ed63f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33504, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9342, negotiated timeout = 60000 18/04/17 16:46:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9342 18/04/17 16:46:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9342 closed 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.8 from job set of time 1523972760000 ms 18/04/17 16:46:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 364.0 (TID 364) in 3434 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:46:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 364.0, whose tasks have all completed, from pool 18/04/17 16:46:03 INFO scheduler.DAGScheduler: ResultStage 364 (foreachPartition at PredictorEngineApp.java:153) finished in 3.435 s 18/04/17 16:46:03 INFO scheduler.DAGScheduler: Job 364 finished: foreachPartition at PredictorEngineApp.java:153, took 3.546683 s 18/04/17 16:46:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3b18e798 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3b18e7980x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33507, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9343, negotiated timeout = 60000 18/04/17 16:46:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9343 18/04/17 16:46:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9343 closed 18/04/17 16:46:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.9 from job set of time 1523972760000 ms 18/04/17 16:46:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 359.0 (TID 359) in 3992 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:46:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 359.0, whose tasks have all completed, from pool 18/04/17 16:46:04 INFO scheduler.DAGScheduler: ResultStage 359 (foreachPartition at PredictorEngineApp.java:153) finished in 4.009 s 18/04/17 16:46:04 INFO scheduler.DAGScheduler: Job 359 finished: foreachPartition at PredictorEngineApp.java:153, took 4.086886 s 18/04/17 16:46:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x10c90d3b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 366.0 (TID 366) in 3969 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:46:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x10c90d3b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 366.0, whose tasks have all completed, from pool 18/04/17 16:46:04 INFO scheduler.DAGScheduler: ResultStage 366 (foreachPartition at PredictorEngineApp.java:153) finished in 3.970 s 18/04/17 16:46:04 INFO scheduler.DAGScheduler: Job 366 finished: foreachPartition at PredictorEngineApp.java:153, took 4.087065 s 18/04/17 16:46:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x37a61a94 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x37a61a940x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55362, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33512, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9319, negotiated timeout = 60000 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9345, negotiated timeout = 60000 18/04/17 16:46:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9345 18/04/17 16:46:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9319 18/04/17 16:46:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9319 closed 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9345 closed 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.6 from job set of time 1523972760000 ms 18/04/17 16:46:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.31 from job set of time 1523972760000 ms 18/04/17 16:46:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 345.0 (TID 345) in 4864 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:46:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 345.0, whose tasks have all completed, from pool 18/04/17 16:46:04 INFO scheduler.DAGScheduler: ResultStage 345 (foreachPartition at PredictorEngineApp.java:153) finished in 4.865 s 18/04/17 16:46:04 INFO scheduler.DAGScheduler: Job 345 finished: foreachPartition at PredictorEngineApp.java:153, took 4.893897 s 18/04/17 16:46:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x79d02e84 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x79d02e840x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38112, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c4e, negotiated timeout = 60000 18/04/17 16:46:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c4e 18/04/17 16:46:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c4e closed 18/04/17 16:46:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.29 from job set of time 1523972760000 ms 18/04/17 16:46:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 350.0 (TID 350) in 5937 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:46:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 350.0, whose tasks have all completed, from pool 18/04/17 16:46:06 INFO scheduler.DAGScheduler: ResultStage 350 (foreachPartition at PredictorEngineApp.java:153) finished in 5.938 s 18/04/17 16:46:06 INFO scheduler.DAGScheduler: Job 351 finished: foreachPartition at PredictorEngineApp.java:153, took 5.988936 s 18/04/17 16:46:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x32c76e86 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x32c76e860x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55374, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a931b, negotiated timeout = 60000 18/04/17 16:46:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a931b 18/04/17 16:46:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a931b closed 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.2 from job set of time 1523972760000 ms 18/04/17 16:46:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 356.0 (TID 356) in 5966 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:46:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 356.0, whose tasks have all completed, from pool 18/04/17 16:46:06 INFO scheduler.DAGScheduler: ResultStage 356 (foreachPartition at PredictorEngineApp.java:153) finished in 5.967 s 18/04/17 16:46:06 INFO scheduler.DAGScheduler: Job 356 finished: foreachPartition at PredictorEngineApp.java:153, took 6.033062 s 18/04/17 16:46:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7e524ce5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7e524ce50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55378, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a931c, negotiated timeout = 60000 18/04/17 16:46:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a931c 18/04/17 16:46:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a931c closed 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.18 from job set of time 1523972760000 ms 18/04/17 16:46:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 363.0 (TID 363) in 5988 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:46:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 363.0, whose tasks have all completed, from pool 18/04/17 16:46:06 INFO scheduler.DAGScheduler: ResultStage 363 (foreachPartition at PredictorEngineApp.java:153) finished in 5.989 s 18/04/17 16:46:06 INFO scheduler.DAGScheduler: Job 363 finished: foreachPartition at PredictorEngineApp.java:153, took 6.095758 s 18/04/17 16:46:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4213aaf2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4213aaf20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38125, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c50, negotiated timeout = 60000 18/04/17 16:46:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c50 18/04/17 16:46:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c50 closed 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.32 from job set of time 1523972760000 ms 18/04/17 16:46:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 365.0 (TID 365) in 6669 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:46:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 365.0, whose tasks have all completed, from pool 18/04/17 16:46:06 INFO scheduler.DAGScheduler: ResultStage 365 (foreachPartition at PredictorEngineApp.java:153) finished in 6.670 s 18/04/17 16:46:06 INFO scheduler.DAGScheduler: Job 365 finished: foreachPartition at PredictorEngineApp.java:153, took 6.783878 s 18/04/17 16:46:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3cc4af0b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3cc4af0b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38128, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c51, negotiated timeout = 60000 18/04/17 16:46:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c51 18/04/17 16:46:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c51 closed 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.20 from job set of time 1523972760000 ms 18/04/17 16:46:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 361.0 (TID 361) in 6764 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:46:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 361.0, whose tasks have all completed, from pool 18/04/17 16:46:06 INFO scheduler.DAGScheduler: ResultStage 361 (foreachPartition at PredictorEngineApp.java:153) finished in 6.765 s 18/04/17 16:46:06 INFO scheduler.DAGScheduler: Job 361 finished: foreachPartition at PredictorEngineApp.java:153, took 6.864285 s 18/04/17 16:46:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6979cd7c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6979cd7c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38131, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c52, negotiated timeout = 60000 18/04/17 16:46:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c52 18/04/17 16:46:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c52 closed 18/04/17 16:46:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.27 from job set of time 1523972760000 ms 18/04/17 16:46:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 341.0 (TID 341) in 7511 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:46:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 341.0, whose tasks have all completed, from pool 18/04/17 16:46:07 INFO scheduler.DAGScheduler: ResultStage 341 (foreachPartition at PredictorEngineApp.java:153) finished in 7.512 s 18/04/17 16:46:07 INFO scheduler.DAGScheduler: Job 341 finished: foreachPartition at PredictorEngineApp.java:153, took 7.525282 s 18/04/17 16:46:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x611913a5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x611913a50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33543, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9348, negotiated timeout = 60000 18/04/17 16:46:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9348 18/04/17 16:46:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9348 closed 18/04/17 16:46:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.12 from job set of time 1523972760000 ms 18/04/17 16:46:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 344.0 (TID 344) in 8324 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:46:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 344.0, whose tasks have all completed, from pool 18/04/17 16:46:08 INFO scheduler.DAGScheduler: ResultStage 344 (foreachPartition at PredictorEngineApp.java:153) finished in 8.324 s 18/04/17 16:46:08 INFO scheduler.DAGScheduler: Job 344 finished: foreachPartition at PredictorEngineApp.java:153, took 8.349358 s 18/04/17 16:46:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x35f4f3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x35f4f30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38142, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c54, negotiated timeout = 60000 18/04/17 16:46:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c54 18/04/17 16:46:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c54 closed 18/04/17 16:46:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.15 from job set of time 1523972760000 ms 18/04/17 16:46:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 355.0 (TID 355) in 8400 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:46:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 355.0, whose tasks have all completed, from pool 18/04/17 16:46:08 INFO scheduler.DAGScheduler: ResultStage 355 (foreachPartition at PredictorEngineApp.java:153) finished in 8.401 s 18/04/17 16:46:08 INFO scheduler.DAGScheduler: Job 354 finished: foreachPartition at PredictorEngineApp.java:153, took 8.471017 s 18/04/17 16:46:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x769a11ed connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x769a11ed0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55401, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a931e, negotiated timeout = 60000 18/04/17 16:46:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a931e 18/04/17 16:46:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a931e closed 18/04/17 16:46:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.24 from job set of time 1523972760000 ms 18/04/17 16:46:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 348.0 (TID 348) in 9630 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:46:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 348.0, whose tasks have all completed, from pool 18/04/17 16:46:09 INFO scheduler.DAGScheduler: ResultStage 348 (foreachPartition at PredictorEngineApp.java:153) finished in 9.631 s 18/04/17 16:46:09 INFO scheduler.DAGScheduler: Job 349 finished: foreachPartition at PredictorEngineApp.java:153, took 9.673462 s 18/04/17 16:46:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xfd53541 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xfd535410x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55405, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9320, negotiated timeout = 60000 18/04/17 16:46:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9320 18/04/17 16:46:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9320 closed 18/04/17 16:46:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.28 from job set of time 1523972760000 ms 18/04/17 16:46:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 346.0 (TID 346) in 9785 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:46:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 346.0, whose tasks have all completed, from pool 18/04/17 16:46:09 INFO scheduler.DAGScheduler: ResultStage 346 (foreachPartition at PredictorEngineApp.java:153) finished in 9.785 s 18/04/17 16:46:09 INFO scheduler.DAGScheduler: Job 346 finished: foreachPartition at PredictorEngineApp.java:153, took 9.819019 s 18/04/17 16:46:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3df2464d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3df2464d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33557, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c934c, negotiated timeout = 60000 18/04/17 16:46:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c934c 18/04/17 16:46:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c934c closed 18/04/17 16:46:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.1 from job set of time 1523972760000 ms 18/04/17 16:46:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 343.0 (TID 343) in 10959 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:46:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 343.0, whose tasks have all completed, from pool 18/04/17 16:46:11 INFO scheduler.DAGScheduler: ResultStage 343 (foreachPartition at PredictorEngineApp.java:153) finished in 10.960 s 18/04/17 16:46:11 INFO scheduler.DAGScheduler: Job 343 finished: foreachPartition at PredictorEngineApp.java:153, took 10.980351 s 18/04/17 16:46:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6eb662c5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6eb662c50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55413, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9322, negotiated timeout = 60000 18/04/17 16:46:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9322 18/04/17 16:46:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9322 closed 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.26 from job set of time 1523972760000 ms 18/04/17 16:46:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 362.0 (TID 362) in 11174 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:46:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 362.0, whose tasks have all completed, from pool 18/04/17 16:46:11 INFO scheduler.DAGScheduler: ResultStage 362 (foreachPartition at PredictorEngineApp.java:153) finished in 11.175 s 18/04/17 16:46:11 INFO scheduler.DAGScheduler: Job 362 finished: foreachPartition at PredictorEngineApp.java:153, took 11.278427 s 18/04/17 16:46:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xab14ac2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xab14ac20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55417, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9323, negotiated timeout = 60000 18/04/17 16:46:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9323 18/04/17 16:46:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9323 closed 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.23 from job set of time 1523972760000 ms 18/04/17 16:46:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 352.0 (TID 352) in 11291 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:46:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 352.0, whose tasks have all completed, from pool 18/04/17 16:46:11 INFO scheduler.DAGScheduler: ResultStage 352 (foreachPartition at PredictorEngineApp.java:153) finished in 11.292 s 18/04/17 16:46:11 INFO scheduler.DAGScheduler: Job 352 finished: foreachPartition at PredictorEngineApp.java:153, took 11.351255 s 18/04/17 16:46:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3896200d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3896200d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33569, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c934f, negotiated timeout = 60000 18/04/17 16:46:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c934f 18/04/17 16:46:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c934f closed 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 354.0 (TID 354) in 11316 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:46:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 354.0, whose tasks have all completed, from pool 18/04/17 16:46:11 INFO scheduler.DAGScheduler: ResultStage 354 (foreachPartition at PredictorEngineApp.java:153) finished in 11.317 s 18/04/17 16:46:11 INFO scheduler.DAGScheduler: Job 355 finished: foreachPartition at PredictorEngineApp.java:153, took 11.382958 s 18/04/17 16:46:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x344af041 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x344af0410x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.34 from job set of time 1523972760000 ms 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33572, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9350, negotiated timeout = 60000 18/04/17 16:46:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9350 18/04/17 16:46:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9350 closed 18/04/17 16:46:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.33 from job set of time 1523972760000 ms 18/04/17 16:46:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 342.0 (TID 342) in 13687 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:46:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 342.0, whose tasks have all completed, from pool 18/04/17 16:46:13 INFO scheduler.DAGScheduler: ResultStage 342 (foreachPartition at PredictorEngineApp.java:153) finished in 13.688 s 18/04/17 16:46:13 INFO scheduler.DAGScheduler: Job 342 finished: foreachPartition at PredictorEngineApp.java:153, took 13.704127 s 18/04/17 16:46:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x436bd1b0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x436bd1b00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33581, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9353, negotiated timeout = 60000 18/04/17 16:46:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9353 18/04/17 16:46:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9353 closed 18/04/17 16:46:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.22 from job set of time 1523972760000 ms 18/04/17 16:46:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 351.0 (TID 351) in 14051 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:46:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 351.0, whose tasks have all completed, from pool 18/04/17 16:46:14 INFO scheduler.DAGScheduler: ResultStage 351 (foreachPartition at PredictorEngineApp.java:153) finished in 14.051 s 18/04/17 16:46:14 INFO scheduler.DAGScheduler: Job 350 finished: foreachPartition at PredictorEngineApp.java:153, took 14.106034 s 18/04/17 16:46:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x353f4e20 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x353f4e200x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38180, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c57, negotiated timeout = 60000 18/04/17 16:46:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c57 18/04/17 16:46:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c57 closed 18/04/17 16:46:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.11 from job set of time 1523972760000 ms 18/04/17 16:46:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 329.0 (TID 329) in 74602 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:46:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 329.0, whose tasks have all completed, from pool 18/04/17 16:46:14 INFO scheduler.DAGScheduler: ResultStage 329 (foreachPartition at PredictorEngineApp.java:153) finished in 74.603 s 18/04/17 16:46:14 INFO scheduler.DAGScheduler: Job 329 finished: foreachPartition at PredictorEngineApp.java:153, took 74.694029 s 18/04/17 16:46:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5da756b9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5da756b90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38183, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c58, negotiated timeout = 60000 18/04/17 16:46:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c58 18/04/17 16:46:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c58 closed 18/04/17 16:46:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972700000 ms.22 from job set of time 1523972700000 ms 18/04/17 16:46:14 INFO scheduler.JobScheduler: Total delay: 74.821 s for time 1523972700000 ms (execution: 74.764 s) 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 396 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 396 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 396 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 396 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 397 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 397 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 397 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 397 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 398 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 398 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 398 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 398 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 399 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 399 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 399 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 399 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 400 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 400 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 400 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 400 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 401 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 401 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 401 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 401 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 402 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 402 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 402 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 402 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 403 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 403 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 403 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 403 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 404 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 404 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 404 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 404 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 405 from persistence list 18/04/17 
16:46:14 INFO storage.BlockManager: Removing RDD 405 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 405 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 405 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 406 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 406 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 406 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 406 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 407 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 407 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 407 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 407 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 408 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 408 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 408 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 408 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 409 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 409 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 409 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 409 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 410 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 410 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 410 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 410 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 411 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 411 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 411 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 411 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 412 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 412 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 412 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 412 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 413 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 413 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 413 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 413 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 414 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 414 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 414 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 414 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 415 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 415 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 415 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 415 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 416 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 416 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 416 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 416 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 417 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 417 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 417 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 417 
18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 418 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 418 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 418 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 418 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 419 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 419 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 419 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 419 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 420 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 420 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 420 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 420 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 421 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 421 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 421 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 421 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 422 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 422 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 422 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 422 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 423 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 423 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 423 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 423 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 424 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 424 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 424 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 424 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 425 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 425 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 425 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 425 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 426 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 426 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 426 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 426 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 427 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 427 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 427 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 427 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 428 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 428 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 428 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 428 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 429 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 429 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 429 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 429 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 430 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 430 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 
430 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 430 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 431 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 431 18/04/17 16:46:14 INFO kafka.KafkaRDD: Removing RDD 431 from persistence list 18/04/17 16:46:14 INFO storage.BlockManager: Removing RDD 431 18/04/17 16:46:14 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:46:14 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972580000 ms 18/04/17 16:46:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 357.0 (TID 357) in 18271 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:46:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 357.0, whose tasks have all completed, from pool 18/04/17 16:46:18 INFO scheduler.DAGScheduler: ResultStage 357 (foreachPartition at PredictorEngineApp.java:153) finished in 18.272 s 18/04/17 16:46:18 INFO scheduler.DAGScheduler: Job 357 finished: foreachPartition at PredictorEngineApp.java:153, took 18.341554 s 18/04/17 16:46:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d971b21 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d971b210x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38192, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c5b, negotiated timeout = 60000 18/04/17 16:46:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c5b 18/04/17 16:46:18 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c5b closed 18/04/17 16:46:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:18 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.5 from job set of time 1523972760000 ms 18/04/17 16:46:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 347.0 (TID 347) in 19786 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:46:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 347.0, whose tasks have all completed, from pool 18/04/17 16:46:19 INFO scheduler.DAGScheduler: ResultStage 347 (foreachPartition at PredictorEngineApp.java:153) finished in 19.787 s 18/04/17 16:46:19 INFO scheduler.DAGScheduler: Job 347 finished: foreachPartition at PredictorEngineApp.java:153, took 19.825591 s 18/04/17 16:46:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x13a8e576 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x13a8e5760x0, 
quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33605, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9356, negotiated timeout = 60000 18/04/17 16:46:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9356 18/04/17 16:46:19 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9356 closed 18/04/17 16:46:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:19 INFO scheduler.JobScheduler: Finished job streaming job 1523972760000 ms.10 from job set of time 1523972760000 ms 18/04/17 16:46:19 INFO scheduler.JobScheduler: Total delay: 19.926 s for time 1523972760000 ms (execution: 19.862 s) 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 432 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 432 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 432 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 432 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 433 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 433 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 433 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 433 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 434 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 434 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 434 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 434 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 435 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 435 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 435 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 435 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 436 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 436 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 436 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 436 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 437 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 437 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 437 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 437 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 438 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 438 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 438 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 438 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 439 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 439 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 439 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 439 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 440 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 440 
18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 440 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 440 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 441 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 441 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 441 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 441 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 442 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 442 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 442 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 442 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 443 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 443 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 443 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 443 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 444 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 444 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 444 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 444 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 445 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 445 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 445 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 445 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 446 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 446 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 446 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 446 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 447 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 447 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 447 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 447 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 448 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 448 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 448 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 448 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 449 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 449 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 449 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 449 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 450 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 450 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 450 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 450 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 451 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 451 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 451 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 451 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 452 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 452 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 452 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 452 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 
453 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 453 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 453 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 453 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 454 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 454 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 454 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 454 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 455 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 455 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 455 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 455 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 456 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 456 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 456 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 456 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 457 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 457 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 457 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 457 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 458 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 458 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 458 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 458 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 459 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 459 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 459 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 459 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 460 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 460 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 460 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 460 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 461 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 461 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 461 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 461 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 462 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 462 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 462 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 462 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 463 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 463 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 463 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 463 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 464 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 464 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 464 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 464 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 465 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 465 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 465 from persistence list 18/04/17 16:46:19 INFO 
storage.BlockManager: Removing RDD 465 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 466 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 466 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 466 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 466 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 467 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 467 18/04/17 16:46:19 INFO kafka.KafkaRDD: Removing RDD 467 from persistence list 18/04/17 16:46:19 INFO storage.BlockManager: Removing RDD 467 18/04/17 16:46:19 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:46:19 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972640000 ms 18/04/17 16:46:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 129.0 (TID 129) in 565144 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:46:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 129.0, whose tasks have all completed, from pool 18/04/17 16:46:25 INFO scheduler.DAGScheduler: ResultStage 129 (foreachPartition at PredictorEngineApp.java:153) finished in 565.145 s 18/04/17 16:46:25 INFO scheduler.DAGScheduler: Job 129 finished: foreachPartition at PredictorEngineApp.java:153, took 565.280794 s 18/04/17 16:46:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x61c9ca5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:46:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x61c9ca50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:46:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:46:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38212, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:46:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c5f, negotiated timeout = 60000 18/04/17 16:46:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c5f 18/04/17 16:46:25 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c5f closed 18/04/17 16:46:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:46:25 INFO scheduler.JobScheduler: Finished job streaming job 1523972220000 ms.26 from job set of time 1523972220000 ms 18/04/17 16:46:25 INFO scheduler.JobScheduler: Total delay: 565.395 s for time 1523972220000 ms (execution: 565.326 s) 18/04/17 16:46:25 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:46:25 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 16:47:00 INFO scheduler.JobScheduler: Added jobs for time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.0 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.1 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.2 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.3 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.4 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.0 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.3 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.6 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.5 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.4 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.8 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.7 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.9 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.10 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.11 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.12 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.13 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.14 from job set of time 1523972820000 ms 
18/04/17 16:47:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.13 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.15 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.16 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.14 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.16 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.17 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.18 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.17 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.19 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.20 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.21 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.22 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.21 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.23 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.25 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.24 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.26 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.27 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.28 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.29 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.30 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.31 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.30 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.32 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.33 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.34 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972820000 ms.35 from job set of time 
1523972820000 ms 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_363_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 369 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 368 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 368 (KafkaRDD[531] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_363_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO 
spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 342 18/04/17 16:47:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_368 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_329_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_329_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 330 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 344 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_342_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_368_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_368_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 368 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 368 (KafkaRDD[531] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 368.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 370 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 369 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_342_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 369 (KafkaRDD[529] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 368.0 (TID 368, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_369 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 343 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_341_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_341_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 346 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_369_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_369_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_344_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 369 from broadcast 
at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 369 (KafkaRDD[529] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 369.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 368 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 370 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_344_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 370 (KafkaRDD[514] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 369.0 (TID 369, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_370 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 345 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_343_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_343_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 348 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_346_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_370_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_370_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 370 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 370 (KafkaRDD[514] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 370.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 371 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 371 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 371 (KafkaRDD[512] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_346_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 370.0 (TID 370, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_371 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 
16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 347 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_368_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_345_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_369_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_345_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 350 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_371_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_371_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_348_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 371 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 371 (KafkaRDD[512] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 371.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 372 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 372 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 372 (KafkaRDD[513] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_348_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 371.0 (TID 371, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_372 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 349 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_347_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_347_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_370_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 351 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_372_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_372_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_349_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO 
spark.SparkContext: Created broadcast 372 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 372 (KafkaRDD[513] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 372.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 373 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 373 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 373 (KafkaRDD[515] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 372.0 (TID 372, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_373 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_349_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 353 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_351_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_351_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_373_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_373_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 373 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 373 (KafkaRDD[515] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 373.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 374 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 374 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 374 (KafkaRDD[527] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 373.0 (TID 373, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_374 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 352 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_371_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_350_piece0 on ***IP masked***:45737 in 
memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_374_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_350_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_374_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 374 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 374 (KafkaRDD[527] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 374.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 375 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 375 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 375 (KafkaRDD[532] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 374.0 (TID 374, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_375 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_367_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_367_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 368 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_375_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_366_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_375_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 375 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 375 (KafkaRDD[532] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 375.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 376 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 376 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 376 (KafkaRDD[516] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_366_piece0 on ***hostname masked***:60107 in memory (size: 3.1 
KB, free: 3.1 GB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 375.0 (TID 375, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_376 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 367 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 355 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_372_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_353_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_373_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_353_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 354 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_376_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_376_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_352_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 376 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 376 (KafkaRDD[516] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 376.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 377 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 377 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 377 (KafkaRDD[536] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_352_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 376.0 (TID 376, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_377 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 357 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_355_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_375_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_374_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_377_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 
18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_377_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_355_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 377 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 377 (KafkaRDD[536] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 377.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 378 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 378 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 378 (KafkaRDD[524] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 356 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 377.0 (TID 377, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_378 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_354_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_354_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 359 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_378_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_357_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_378_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 378 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 378 (KafkaRDD[524] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 378.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 379 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 379 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_357_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 379 (KafkaRDD[506] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 378.0 (TID 
378, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_379 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 358 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_356_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_356_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_376_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_379_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_379_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 379 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 361 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 379 (KafkaRDD[506] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 379.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 380 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 380 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 380 (KafkaRDD[535] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_377_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_359_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_380 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 379.0 (TID 379, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_359_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 360 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_358_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_380_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_380_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 380 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 380 (KafkaRDD[535] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: 
Adding task set 380.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 381 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 381 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 381 (KafkaRDD[505] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_358_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_381 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 380.0 (TID 380, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 363 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_361_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_378_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_361_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_381_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_381_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 362 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 381 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 381 (KafkaRDD[505] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 381.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 382 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 382 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 382 (KafkaRDD[526] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_379_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_360_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_382 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 381.0 (TID 381, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_360_piece0 on ***hostname masked***:42188 in 
memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 365 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 364 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_362_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_362_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_382_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_382_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 382 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 382 (KafkaRDD[526] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 382.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 383 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 383 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 383 (KafkaRDD[523] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_383 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 382.0 (TID 382, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_365_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_380_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_365_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_381_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO spark.ContextCleaner: Cleaned accumulator 366 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_383_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_364_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_383_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 383 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 383 (KafkaRDD[523] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 383.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 384 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 
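The jobs in this stretch of the log all trace back to two call sites in the application: createDirectStream at PredictorEngineApp.java:125 and foreachPartition at PredictorEngineApp.java:153. Each batch produces a series of single-task KafkaRDD ResultStages, and after each job finishes the driver opens what looks like a short-lived HBase connection (the paired ZooKeeper "Session establishment complete" / "Session ... closed" lines further down). The application source is not part of this log, so the sketch below is only a plausible reconstruction of such a driver loop against Spark 1.6's Java streaming API: the broker list, topic names, 60-second batch interval, "predictions" table, column family "m", and the per-batch marker Put are all assumptions made for illustration, not taken from the log.

    import java.util.*;

    import kafka.serializer.StringDecoder;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.function.VoidFunction;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    import scala.Tuple2;

    // Illustrative sketch only; not the actual PredictorEngineApp source.
    public class DirectStreamSketch {
      public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        // A 60 s batch interval is assumed; it would match the minute-aligned
        // "streaming job 1523972820000 ms" batch time seen in this log.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092");  // hypothetical broker list

        // One direct stream per topic yields one KafkaRDD (and one single-task
        // ResultStage) per topic per batch -- consistent with the repeated
        // "Got job N ... with 1 output partitions" lines above.
        for (String topic : Arrays.asList("topic-a", "topic-b")) {  // hypothetical topic names
          JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
              jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
              kafkaParams, Collections.singleton(topic));

          stream.foreachRDD(new VoidFunction<JavaPairRDD<String, String>>() {
            @Override
            public void call(JavaPairRDD<String, String> rdd) throws Exception {
              // The action that appears as "foreachPartition at PredictorEngineApp.java:153".
              rdd.foreachPartition(new VoidFunction<Iterator<Tuple2<String, String>>>() {
                @Override
                public void call(Iterator<Tuple2<String, String>> records) {
                  while (records.hasNext()) {
                    records.next();  // per-record scoring/prediction logic would go here
                  }
                }
              });

              // A short-lived driver-side HBase connection, opened and closed once per
              // job, would account for the paired ZooKeeper session open/close lines
              // that follow each "Job N finished" entry. This marker write is hypothetical.
              Configuration hbaseConf = HBaseConfiguration.create();  // picks up hbase-site.xml
              try (Connection hbase = ConnectionFactory.createConnection(hbaseConf);
                   Table table = hbase.getTable(TableName.valueOf("predictions"))) {
                Put marker = new Put(Bytes.toBytes("batch-" + System.currentTimeMillis()));
                marker.addColumn(Bytes.toBytes("m"), Bytes.toBytes("done"), Bytes.toBytes("1"));
                table.put(marker);
              }
            }
          });
        }

        jssc.start();
        jssc.awaitTermination();
      }
    }

If the application does take roughly this shape, the ZooKeeper session churn between 16:47:00 and 16:47:09 is simply one connect/close cycle per finished output operation; a connection created once and reused across batches would be the usual way to avoid that churn, but nothing in this log confirms which approach the application actually uses.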
18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 384 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 384 (KafkaRDD[510] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_384 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Removed broadcast_364_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_382_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 383.0 (TID 383, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_384_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_384_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 384 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 384 (KafkaRDD[510] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 384.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 385 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 385 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 385 (KafkaRDD[519] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_385 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 384.0 (TID 384, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_385_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_385_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 385 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 385 (KafkaRDD[519] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 385.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 386 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_383_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 386 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 386 (KafkaRDD[528] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_386 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 385.0 (TID 385, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_384_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_386_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_386_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 386 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 386 (KafkaRDD[528] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 386.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 387 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 387 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 387 (KafkaRDD[530] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_387 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 386.0 (TID 386, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_387_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_387_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 387 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 387 (KafkaRDD[530] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 387.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 388 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 388 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 388 (KafkaRDD[537] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 
16:47:00 INFO storage.MemoryStore: Block broadcast_388 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 387.0 (TID 387, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_388_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_388_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 388 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 388 (KafkaRDD[537] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 388.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 390 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 389 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 389 (KafkaRDD[522] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_389 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 388.0 (TID 388, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_389_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_389_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 389 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 389 (KafkaRDD[522] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 389.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 391 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 390 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 390 (KafkaRDD[511] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_390 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 389.0 (TID 389, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_387_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_390_piece0 stored as bytes in memory 
(estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_390_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 390 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 390 (KafkaRDD[511] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 390.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 389 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 391 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 391 (KafkaRDD[533] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_391 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 390.0 (TID 390, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_389_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_391_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_391_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 391 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 391 (KafkaRDD[533] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 391.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 393 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 392 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 392 (KafkaRDD[538] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_392 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 391.0 (TID 391, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_392_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_392_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 392 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 392 
(KafkaRDD[538] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 392.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 392 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 393 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 393 (KafkaRDD[539] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_393 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 392.0 (TID 392, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_393_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_393_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 393 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 393 (KafkaRDD[539] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 393.0 with 1 tasks 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Got job 394 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 394 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting ResultStage 394 (KafkaRDD[509] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_394 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 393.0 (TID 393, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 16:47:00 INFO storage.MemoryStore: Block broadcast_394_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_394_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:47:00 INFO spark.SparkContext: Created broadcast 394 from broadcast at DAGScheduler.scala:1006 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 394 (KafkaRDD[509] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Adding task set 394.0 with 1 tasks 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_391_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 394.0 (TID 394, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added 
broadcast_392_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_394_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_390_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_386_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_385_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_388_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 378.0 (TID 378) in 246 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 378.0, whose tasks have all completed, from pool 18/04/17 16:47:00 INFO scheduler.DAGScheduler: ResultStage 378 (foreachPartition at PredictorEngineApp.java:153) finished in 0.247 s 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Job 378 finished: foreachPartition at PredictorEngineApp.java:153, took 0.302668 s 18/04/17 16:47:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x37e64d65 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x37e64d650x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:00 INFO storage.BlockManagerInfo: Added broadcast_393_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:47:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55637, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9331, negotiated timeout = 60000 18/04/17 16:47:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9331 18/04/17 16:47:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9331 closed 18/04/17 16:47:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.20 from job set of time 1523972820000 ms 18/04/17 16:47:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 393.0 (TID 393) in 393 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:47:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 393.0, whose tasks have all completed, from pool 18/04/17 16:47:00 INFO scheduler.DAGScheduler: ResultStage 393 (foreachPartition at PredictorEngineApp.java:153) finished in 0.394 s 18/04/17 16:47:00 INFO scheduler.DAGScheduler: Job 392 finished: foreachPartition at PredictorEngineApp.java:153, took 0.507034 s 18/04/17 16:47:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f151709 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f1517090x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33789, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9368, negotiated timeout = 60000 18/04/17 16:47:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9368 18/04/17 16:47:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9368 closed 18/04/17 16:47:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.35 from job set of time 1523972820000 ms 18/04/17 16:47:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 390.0 (TID 390) in 1984 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:47:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 390.0, whose tasks have all completed, from pool 18/04/17 16:47:02 INFO scheduler.DAGScheduler: ResultStage 390 (foreachPartition at PredictorEngineApp.java:153) finished in 1.986 s 18/04/17 16:47:02 INFO scheduler.DAGScheduler: Job 391 finished: foreachPartition at PredictorEngineApp.java:153, took 2.083111 s 18/04/17 16:47:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x19b5a08 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x19b5a080x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38389, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c72, negotiated timeout = 60000 18/04/17 16:47:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 369.0 (TID 369) in 2095 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:47:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 369.0, whose tasks have all completed, from pool 18/04/17 16:47:02 INFO scheduler.DAGScheduler: ResultStage 369 (foreachPartition at PredictorEngineApp.java:153) finished in 2.095 s 18/04/17 16:47:02 INFO scheduler.DAGScheduler: Job 370 finished: foreachPartition at PredictorEngineApp.java:153, took 2.112239 s 18/04/17 16:47:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c72 18/04/17 16:47:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x56c15327 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x56c153270x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38392, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c72 closed 18/04/17 16:47:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c74, negotiated timeout = 60000 18/04/17 16:47:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.7 from job set of time 1523972820000 ms 18/04/17 16:47:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c74 18/04/17 16:47:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c74 closed 18/04/17 16:47:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.25 from job set of time 1523972820000 ms 18/04/17 16:47:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 371.0 (TID 371) in 3182 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:47:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 371.0, whose tasks have all completed, from pool 18/04/17 16:47:03 INFO scheduler.DAGScheduler: ResultStage 371 (foreachPartition at PredictorEngineApp.java:153) finished in 3.182 s 18/04/17 16:47:03 INFO scheduler.DAGScheduler: Job 371 finished: foreachPartition at PredictorEngineApp.java:153, took 3.209100 s 18/04/17 16:47:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6f2670b5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 16:47:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6f2670b50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38401, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c77, negotiated timeout = 60000 18/04/17 16:47:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c77 18/04/17 16:47:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c77 closed 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.8 from job set of time 1523972820000 ms 18/04/17 16:47:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 380.0 (TID 380) in 3794 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:47:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 380.0, whose tasks have all completed, from pool 18/04/17 16:47:03 INFO scheduler.DAGScheduler: ResultStage 380 (foreachPartition at PredictorEngineApp.java:153) finished in 3.795 s 18/04/17 16:47:03 INFO scheduler.DAGScheduler: Job 380 finished: foreachPartition at PredictorEngineApp.java:153, took 3.859005 s 18/04/17 16:47:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d47540d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d47540d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33812, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9369, negotiated timeout = 60000 18/04/17 16:47:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9369 18/04/17 16:47:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9369 closed 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 377.0 (TID 377) in 3836 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:47:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 377.0, whose tasks have all completed, from pool 18/04/17 16:47:03 INFO scheduler.DAGScheduler: ResultStage 377 (foreachPartition at PredictorEngineApp.java:153) finished in 3.837 s 18/04/17 16:47:03 INFO scheduler.DAGScheduler: Job 377 finished: foreachPartition at PredictorEngineApp.java:153, took 3.888293 s 18/04/17 16:47:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5e92564c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5e92564c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33815, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.31 from job set of time 1523972820000 ms 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c936a, negotiated timeout = 60000 18/04/17 16:47:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c936a 18/04/17 16:47:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c936a closed 18/04/17 16:47:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.32 from job set of time 1523972820000 ms 18/04/17 16:47:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 376.0 (TID 376) in 4803 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:47:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 376.0, whose tasks have all completed, from pool 18/04/17 16:47:04 INFO scheduler.DAGScheduler: ResultStage 376 (foreachPartition at PredictorEngineApp.java:153) finished in 4.804 s 18/04/17 16:47:04 INFO scheduler.DAGScheduler: Job 376 finished: foreachPartition at PredictorEngineApp.java:153, took 4.851667 s 18/04/17 16:47:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xba25fbf connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xba25fbf0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55670, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9338, negotiated timeout = 60000 18/04/17 16:47:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9338 18/04/17 16:47:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9338 closed 18/04/17 16:47:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.12 from job set of time 1523972820000 ms 18/04/17 16:47:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 368.0 (TID 368) in 7053 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:47:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 368.0, whose tasks have all completed, from pool 18/04/17 16:47:07 INFO scheduler.DAGScheduler: ResultStage 368 (foreachPartition at PredictorEngineApp.java:153) finished in 7.053 s 18/04/17 16:47:07 INFO scheduler.DAGScheduler: Job 369 finished: foreachPartition at PredictorEngineApp.java:153, took 7.064862 s 18/04/17 16:47:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4e94dcb4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4e94dcb40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33825, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c936b, negotiated timeout = 60000 18/04/17 16:47:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c936b 18/04/17 16:47:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c936b closed 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.27 from job set of time 1523972820000 ms 18/04/17 16:47:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 385.0 (TID 385) in 7151 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:47:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 385.0, whose tasks have all completed, from pool 18/04/17 16:47:07 INFO scheduler.DAGScheduler: ResultStage 385 (foreachPartition at PredictorEngineApp.java:153) finished in 7.153 s 18/04/17 16:47:07 INFO scheduler.DAGScheduler: Job 385 finished: foreachPartition at PredictorEngineApp.java:153, took 7.235611 s 18/04/17 16:47:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x52370070 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x523700700x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55680, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a933b, negotiated timeout = 60000 18/04/17 16:47:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a933b 18/04/17 16:47:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a933b closed 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.15 from job set of time 1523972820000 ms 18/04/17 16:47:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 388.0 (TID 388) in 7236 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:47:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 388.0, whose tasks have all completed, from pool 18/04/17 16:47:07 INFO scheduler.DAGScheduler: ResultStage 388 (foreachPartition at PredictorEngineApp.java:153) finished in 7.237 s 18/04/17 16:47:07 INFO scheduler.DAGScheduler: Job 388 finished: foreachPartition at PredictorEngineApp.java:153, took 7.331128 s 18/04/17 16:47:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x56309011 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x563090110x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55683, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a933d, negotiated timeout = 60000 18/04/17 16:47:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a933d 18/04/17 16:47:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a933d closed 18/04/17 16:47:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.33 from job set of time 1523972820000 ms 18/04/17 16:47:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 386.0 (TID 386) in 8232 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:47:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 386.0, whose tasks have all completed, from pool 18/04/17 16:47:08 INFO scheduler.DAGScheduler: ResultStage 386 (foreachPartition at PredictorEngineApp.java:153) finished in 8.234 s 18/04/17 16:47:08 INFO scheduler.DAGScheduler: Job 386 finished: foreachPartition at PredictorEngineApp.java:153, took 8.320592 s 18/04/17 16:47:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7f205e81 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7f205e810x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33836, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c936c, negotiated timeout = 60000 18/04/17 16:47:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c936c 18/04/17 16:47:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c936c closed 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.24 from job set of time 1523972820000 ms 18/04/17 16:47:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 394.0 (TID 394) in 8565 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:47:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 394.0, whose tasks have all completed, from pool 18/04/17 16:47:08 INFO scheduler.DAGScheduler: ResultStage 394 (foreachPartition at PredictorEngineApp.java:153) finished in 8.565 s 18/04/17 16:47:08 INFO scheduler.DAGScheduler: Job 394 finished: foreachPartition at PredictorEngineApp.java:153, took 8.680415 s 18/04/17 16:47:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7ddd814c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7ddd814c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55690, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a933f, negotiated timeout = 60000 18/04/17 16:47:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a933f 18/04/17 16:47:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a933f closed 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.5 from job set of time 1523972820000 ms 18/04/17 16:47:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 372.0 (TID 372) in 8785 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:47:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 372.0, whose tasks have all completed, from pool 18/04/17 16:47:08 INFO scheduler.DAGScheduler: ResultStage 372 (foreachPartition at PredictorEngineApp.java:153) finished in 8.786 s 18/04/17 16:47:08 INFO scheduler.DAGScheduler: Job 372 finished: foreachPartition at PredictorEngineApp.java:153, took 8.817499 s 18/04/17 16:47:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x484aadfe connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x484aadfe0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38437, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c7b, negotiated timeout = 60000 18/04/17 16:47:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c7b 18/04/17 16:47:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c7b closed 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.9 from job set of time 1523972820000 ms 18/04/17 16:47:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 383.0 (TID 383) in 8772 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:47:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 383.0, whose tasks have all completed, from pool 18/04/17 16:47:08 INFO scheduler.DAGScheduler: ResultStage 383 (foreachPartition at PredictorEngineApp.java:153) finished in 8.774 s 18/04/17 16:47:08 INFO scheduler.DAGScheduler: Job 383 finished: foreachPartition at PredictorEngineApp.java:153, took 8.848685 s 18/04/17 16:47:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x375448a8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x375448a80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38440, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c7c, negotiated timeout = 60000 18/04/17 16:47:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c7c 18/04/17 16:47:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c7c closed 18/04/17 16:47:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.19 from job set of time 1523972820000 ms 18/04/17 16:47:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 373.0 (TID 373) in 9449 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:47:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 373.0, whose tasks have all completed, from pool 18/04/17 16:47:09 INFO scheduler.DAGScheduler: ResultStage 373 (foreachPartition at PredictorEngineApp.java:153) finished in 9.450 s 18/04/17 16:47:09 INFO scheduler.DAGScheduler: Job 373 finished: foreachPartition at PredictorEngineApp.java:153, took 9.485905 s 18/04/17 16:47:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2282a662 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2282a6620x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38444, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c7d, negotiated timeout = 60000 18/04/17 16:47:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c7d 18/04/17 16:47:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c7d closed 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.11 from job set of time 1523972820000 ms 18/04/17 16:47:09 INFO scheduler.DAGScheduler: ResultStage 384 (foreachPartition at PredictorEngineApp.java:153) finished in 9.567 s 18/04/17 16:47:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 384.0 (TID 384) in 9566 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:47:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 384.0, whose tasks have all completed, from pool 18/04/17 16:47:09 INFO scheduler.DAGScheduler: Job 384 finished: foreachPartition at PredictorEngineApp.java:153, took 9.646331 s 18/04/17 16:47:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x39a926a0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x39a926a00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33852, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c936f, negotiated timeout = 60000 18/04/17 16:47:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c936f 18/04/17 16:47:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 387.0 (TID 387) in 9576 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:47:09 INFO scheduler.DAGScheduler: ResultStage 387 (foreachPartition at PredictorEngineApp.java:153) finished in 9.576 s 18/04/17 16:47:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 387.0, whose tasks have all completed, from pool 18/04/17 16:47:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c936f closed 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:09 INFO scheduler.DAGScheduler: Job 387 finished: foreachPartition at PredictorEngineApp.java:153, took 9.667113 s 18/04/17 16:47:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x30f7d1ca connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x30f7d1ca0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38450, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c7e, negotiated timeout = 60000 18/04/17 16:47:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.6 from job set of time 1523972820000 ms 18/04/17 16:47:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c7e 18/04/17 16:47:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c7e closed 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.26 from job set of time 1523972820000 ms 18/04/17 16:47:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 375.0 (TID 375) in 9822 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:47:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 375.0, whose tasks have all completed, from pool 18/04/17 16:47:09 INFO scheduler.DAGScheduler: ResultStage 375 (foreachPartition at PredictorEngineApp.java:153) finished in 9.823 s 18/04/17 16:47:09 INFO scheduler.DAGScheduler: Job 375 finished: foreachPartition at PredictorEngineApp.java:153, took 9.867556 s 18/04/17 16:47:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2266962d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2266962d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38454, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c7f, negotiated timeout = 60000 18/04/17 16:47:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c7f 18/04/17 16:47:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c7f closed 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.28 from job set of time 1523972820000 ms 18/04/17 16:47:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 389.0 (TID 389) in 9812 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:47:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 389.0, whose tasks have all completed, from pool 18/04/17 16:47:09 INFO scheduler.DAGScheduler: ResultStage 389 (foreachPartition at PredictorEngineApp.java:153) finished in 9.813 s 18/04/17 16:47:09 INFO scheduler.DAGScheduler: Job 390 finished: foreachPartition at PredictorEngineApp.java:153, took 9.906058 s 18/04/17 16:47:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x44390d74 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x44390d740x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38457, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c80, negotiated timeout = 60000 18/04/17 16:47:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c80 18/04/17 16:47:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c80 closed 18/04/17 16:47:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.18 from job set of time 1523972820000 ms 18/04/17 16:47:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 374.0 (TID 374) in 10180 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:47:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 374.0, whose tasks have all completed, from pool 18/04/17 16:47:10 INFO scheduler.DAGScheduler: ResultStage 374 (foreachPartition at PredictorEngineApp.java:153) finished in 10.180 s 18/04/17 16:47:10 INFO scheduler.DAGScheduler: Job 374 finished: foreachPartition at PredictorEngineApp.java:153, took 10.220356 s 18/04/17 16:47:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x153c901f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x153c901f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55717, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9340, negotiated timeout = 60000 18/04/17 16:47:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9340 18/04/17 16:47:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9340 closed 18/04/17 16:47:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.23 from job set of time 1523972820000 ms 18/04/17 16:47:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 381.0 (TID 381) in 14688 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:47:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 381.0, whose tasks have all completed, from pool 18/04/17 16:47:14 INFO scheduler.DAGScheduler: ResultStage 381 (foreachPartition at PredictorEngineApp.java:153) finished in 14.688 s 18/04/17 16:47:14 INFO scheduler.DAGScheduler: Job 381 finished: foreachPartition at PredictorEngineApp.java:153, took 14.755858 s 18/04/17 16:47:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x35a2a3a2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x35a2a3a20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55726, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9342, negotiated timeout = 60000 18/04/17 16:47:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9342 18/04/17 16:47:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9342 closed 18/04/17 16:47:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.1 from job set of time 1523972820000 ms 18/04/17 16:47:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 379.0 (TID 379) in 14811 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:47:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 379.0, whose tasks have all completed, from pool 18/04/17 16:47:14 INFO scheduler.DAGScheduler: ResultStage 379 (foreachPartition at PredictorEngineApp.java:153) finished in 14.812 s 18/04/17 16:47:14 INFO scheduler.DAGScheduler: Job 379 finished: foreachPartition at PredictorEngineApp.java:153, took 14.871426 s 18/04/17 16:47:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2c166705 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2c1667050x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38473, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c83, negotiated timeout = 60000 18/04/17 16:47:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c83 18/04/17 16:47:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c83 closed 18/04/17 16:47:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.2 from job set of time 1523972820000 ms 18/04/17 16:47:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 391.0 (TID 391) in 15296 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:47:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 391.0, whose tasks have all completed, from pool 18/04/17 16:47:15 INFO scheduler.DAGScheduler: ResultStage 391 (foreachPartition at PredictorEngineApp.java:153) finished in 15.297 s 18/04/17 16:47:15 INFO scheduler.DAGScheduler: Job 389 finished: foreachPartition at PredictorEngineApp.java:153, took 15.399504 s 18/04/17 16:47:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5983f2dc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5983f2dc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55733, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9345, negotiated timeout = 60000 18/04/17 16:47:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9345 18/04/17 16:47:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9345 closed 18/04/17 16:47:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:15 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.29 from job set of time 1523972820000 ms 18/04/17 16:47:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 370.0 (TID 370) in 15745 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:47:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 370.0, whose tasks have all completed, from pool 18/04/17 16:47:15 INFO scheduler.DAGScheduler: ResultStage 370 (foreachPartition at PredictorEngineApp.java:153) finished in 15.745 s 18/04/17 16:47:15 INFO scheduler.DAGScheduler: Job 368 finished: foreachPartition at PredictorEngineApp.java:153, took 15.767557 s 18/04/17 16:47:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x36aa914a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x36aa914a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33885, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9374, negotiated timeout = 60000 18/04/17 16:47:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9374 18/04/17 16:47:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9374 closed 18/04/17 16:47:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:15 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.10 from job set of time 1523972820000 ms 18/04/17 16:47:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 392.0 (TID 392) in 16251 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:47:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 392.0, whose tasks have all completed, from pool 18/04/17 16:47:16 INFO scheduler.DAGScheduler: ResultStage 392 (foreachPartition at PredictorEngineApp.java:153) finished in 16.259 s 18/04/17 16:47:16 INFO scheduler.DAGScheduler: Job 393 finished: foreachPartition at PredictorEngineApp.java:153, took 16.363541 s 18/04/17 16:47:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2522526 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x25225260x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55740, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9346, negotiated timeout = 60000 18/04/17 16:47:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9346 18/04/17 16:47:16 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9346 closed 18/04/17 16:47:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:16 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.34 from job set of time 1523972820000 ms 18/04/17 16:47:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 382.0 (TID 382) in 21425 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:47:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 382.0, whose tasks have all completed, from pool 18/04/17 16:47:21 INFO scheduler.DAGScheduler: ResultStage 382 (foreachPartition at PredictorEngineApp.java:153) finished in 21.425 s 18/04/17 16:47:21 INFO scheduler.DAGScheduler: Job 382 finished: foreachPartition at PredictorEngineApp.java:153, took 21.497455 s 18/04/17 16:47:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3299f2d3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:47:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3299f2d30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:47:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:47:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38495, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:47:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c84, negotiated timeout = 60000 18/04/17 16:47:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c84 18/04/17 16:47:21 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c84 closed 18/04/17 16:47:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:47:21 INFO scheduler.JobScheduler: Finished job streaming job 1523972820000 ms.22 from job set of time 1523972820000 ms 18/04/17 16:47:21 INFO scheduler.JobScheduler: Total delay: 21.619 s for time 1523972820000 ms (execution: 21.565 s) 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 468 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 468 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 468 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 468 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 469 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 469 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 469 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 469 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 470 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 470 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 470 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 470 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 471 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 471 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 471 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 471 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 472 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 472 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 472 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 472 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 473 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 473 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 473 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 473 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 474 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 474 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 474 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 474 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 475 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 475 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 475 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 475 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 476 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 476 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 476 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 476 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 477 from persistence list 18/04/17 
16:47:21 INFO storage.BlockManager: Removing RDD 477 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 477 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 477 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 478 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 478 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 478 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 478 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 479 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 479 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 479 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 479 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 480 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 480 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 480 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 480 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 481 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 481 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 481 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 481 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 482 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 482 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 482 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 482 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 483 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 483 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 483 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 483 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 484 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 484 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 484 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 484 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 485 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 485 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 485 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 485 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 486 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 486 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 486 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 486 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 487 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 487 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 487 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 487 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 488 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 488 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 488 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 488 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 489 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 489 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 489 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 489 
18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 490 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 490 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 490 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 490 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 491 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 491 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 491 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 491 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 492 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 492 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 492 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 492 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 493 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 493 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 493 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 493 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 494 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 494 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 494 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 494 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 495 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 495 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 495 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 495 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 496 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 496 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 496 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 496 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 497 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 497 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 497 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 497 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 498 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 498 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 498 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 498 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 499 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 499 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 499 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 499 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 500 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 500 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 500 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 500 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 501 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 501 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 501 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 501 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 502 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 502 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 
502 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 502 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 503 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 503 18/04/17 16:47:21 INFO kafka.KafkaRDD: Removing RDD 503 from persistence list 18/04/17 16:47:21 INFO storage.BlockManager: Removing RDD 503 18/04/17 16:47:21 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:47:21 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972700000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Added jobs for time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.0 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.1 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.0 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.2 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.3 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.4 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.3 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.6 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.4 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.5 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.7 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.9 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.8 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.10 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.12 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.11 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.13 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.14 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.15 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.13 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.16 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.17 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO 
scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.14 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.18 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.17 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.19 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.16 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.21 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.20 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.21 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.22 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.23 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.24 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.25 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.26 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.27 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.28 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.29 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.30 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.31 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.32 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.30 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.34 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.33 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972880000 ms.35 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 
16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 395 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 395 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 395 (KafkaRDD[549] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_395 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_395_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_395_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 395 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 395 (KafkaRDD[549] at createDirectStream at 
PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 395.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 396 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 396 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 396 (KafkaRDD[560] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_396 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 395.0 (TID 395, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_396_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_396_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 396 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 396 (KafkaRDD[560] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 396.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 397 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 397 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 397 (KafkaRDD[542] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 396.0 (TID 396, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_397 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_397_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_397_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 397 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 397 (KafkaRDD[542] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 397.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 398 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 398 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 
18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 398 (KafkaRDD[541] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 397.0 (TID 397, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_398 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_398_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_398_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 398 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 398 (KafkaRDD[541] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 398.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 399 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 399 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 399 (KafkaRDD[558] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_399 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 398.0 (TID 398, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_399_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_399_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 399 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 399 (KafkaRDD[558] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 399.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 400 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 400 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 400 (KafkaRDD[567] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_400 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 399.0 (TID 399, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added 
broadcast_396_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_400_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_400_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 400 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 400 (KafkaRDD[567] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 400.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 401 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 401 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 401 (KafkaRDD[562] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_401 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_395_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 400.0 (TID 400, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_401_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_401_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 401 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 401 (KafkaRDD[562] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 401.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 402 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 402 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 402 (KafkaRDD[573] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_402 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 401.0 (TID 401, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_397_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_402_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 
16:48:00 INFO storage.BlockManagerInfo: Added broadcast_402_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 402 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 402 (KafkaRDD[573] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 402.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 403 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 403 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 403 (KafkaRDD[572] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_403 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 402.0 (TID 402, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_403_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_403_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 403 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 403 (KafkaRDD[572] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 403.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 404 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 404 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 404 (KafkaRDD[548] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_398_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_404 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 403.0 (TID 403, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_401_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_399_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_404_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_404_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 404 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 404 (KafkaRDD[548] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_382_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 404.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 405 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 405 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 405 (KafkaRDD[547] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_405 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 404.0 (TID 404, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_402_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_405_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_405_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 405 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 405 (KafkaRDD[547] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 405.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 406 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 406 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 406 (KafkaRDD[565] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_406 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 405.0 (TID 405, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_382_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_400_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_406_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO 
storage.BlockManagerInfo: Added broadcast_406_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 406 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 406 (KafkaRDD[565] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 406.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 407 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 407 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 407 (KafkaRDD[568] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_407 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_403_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_368_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 406.0 (TID 406, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_368_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_407_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_407_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 407 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 407 (KafkaRDD[568] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 407.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 408 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 408 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 408 (KafkaRDD[550] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_408 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 407.0 (TID 407, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 369 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 372 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_370_piece0 on ***IP masked***:45737 in 
memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_408_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_408_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 408 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 408 (KafkaRDD[550] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 408.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 409 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 409 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 409 (KafkaRDD[566] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_370_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_409 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 408.0 (TID 408, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 371 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 374 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_372_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_409_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_409_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_372_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 409 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 409 (KafkaRDD[566] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 409.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 410 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 410 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 410 (KafkaRDD[569] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 373 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_410 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 
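[Editor's aside] The scheduler entries in this stretch of the log all follow one shape: a job is created from foreachPartition at PredictorEngineApp.java:153, its single ResultStage wraps a KafkaRDD produced by createDirectStream at PredictorEngineApp.java:125, a small broadcast is shipped, one task runs, and after the job finishes the driver opens and immediately closes an HBase connection through the ZooKeeper quorum (the hconnection-* / RecoverableZooKeeper lines). The sketch below shows driver code consistent with that pattern so the remaining repetitions are easier to read. It is a minimal reconstruction, not the actual application: the class name PredictorEngineAppSketch, broker list, topic, batch interval, HBase table, column family, and row contents are invented placeholders; only the createDirectStream/foreachPartition structure, Spark 1.6 APIs, and the short-lived HBase-over-ZooKeeper connections are taken from the log.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    import kafka.serializer.StringDecoder;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    public class PredictorEngineAppSketch {

        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
            // Batch interval is a placeholder; the log only shows the job-set time 1523972880000 ms.
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

            Map<String, String> kafkaParams = new HashMap<>();
            kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers
            Set<String> topics = new HashSet<>();
            topics.add("some-topic"); // placeholder topic

            // Direct (receiver-less) Kafka stream, as logged at PredictorEngineApp.java:125.
            // Each KafkaRDD partition becomes one task, which matches the single-task
            // ResultStages submitted throughout this log.
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, topics);

            stream.foreachRDD(rdd -> {
                // Executor-side work, as logged at PredictorEngineApp.java:153. What each
                // record actually does is not visible in this driver-side log.
                rdd.foreachPartition(records -> {
                    while (records.hasNext()) {
                        records.next(); // placeholder for the real per-record prediction logic
                    }
                });

                // Driver-side bookkeeping after the job returns. The hconnection-* open/close
                // pairs in the log (RecoverableZooKeeper -> session established -> session
                // closed) are consistent with a short-lived connection like this; the table,
                // column family, and row written here are invented.
                Configuration hbaseConf = HBaseConfiguration.create();
                try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
                     Table table = connection.getTable(TableName.valueOf("predictor_state"))) {
                    Put put = new Put(Bytes.toBytes("batch-" + System.currentTimeMillis()));
                    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("status"), Bytes.toBytes("done"));
                    table.put(put);
                }
            });

            jssc.start();
            jssc.awaitTermination();
        }
    }

The log interleaves jobs from several dozen such output operations (streaming job 1523972880000 ms.7 through ms.35), so the real application presumably wires up one stream or output operation like this per topic; the sketch shows a single one for brevity.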
18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_406_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_371_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 409.0 (TID 409, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_371_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_410_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_410_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 410 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 410 (KafkaRDD[569] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 410.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 411 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 411 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 411 (KafkaRDD[555] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 376 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_405_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_411 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_374_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 410.0 (TID 410, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_404_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_374_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_411_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_411_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 411 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 411 (KafkaRDD[555] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 411.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 412 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 
16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 412 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 412 (KafkaRDD[563] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_412 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 411.0 (TID 411, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_409_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_412_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_412_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 412 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 412 (KafkaRDD[563] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 412.0 with 1 tasks 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 375 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 413 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 413 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 413 (KafkaRDD[571] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_413 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_373_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_407_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 412.0 (TID 412, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_410_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_373_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 378 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_413_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_413_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_408_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 
16:48:00 INFO spark.SparkContext: Created broadcast 413 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 413 (KafkaRDD[571] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_376_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 413.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 414 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 414 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 414 (KafkaRDD[546] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_411_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_414 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_376_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 413.0 (TID 413, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 377 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_375_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_414_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_414_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_375_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 414 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 414 (KafkaRDD[546] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 414.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 415 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 415 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_412_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 415 (KafkaRDD[559] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 380 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_415 stored as values in memory 
(estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 414.0 (TID 414, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_378_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_378_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 379 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_415_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_415_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_377_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 415 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 415 (KafkaRDD[559] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 415.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 417 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 416 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 416 (KafkaRDD[552] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_416 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_377_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 415.0 (TID 415, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_413_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 382 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_380_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_416_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_380_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_416_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_414_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 416 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 416 (KafkaRDD[552] at createDirectStream at 
PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 416.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 418 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 417 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 381 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 417 (KafkaRDD[545] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_417 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_379_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 416.0 (TID 416, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_379_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_417_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_417_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 417 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 417 (KafkaRDD[545] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 417.0 with 1 tasks 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 384 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 383 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 419 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 418 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 418 (KafkaRDD[564] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_418 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_381_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 417.0 (TID 417, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_381_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_415_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_418_piece0 stored as bytes in memory (estimated 
size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 386 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_418_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 418 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 418 (KafkaRDD[564] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 418.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 416 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 419 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 419 (KafkaRDD[551] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_384_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_419 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 418.0 (TID 418, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_384_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 385 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_419_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_419_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_416_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 419 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_383_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 419 (KafkaRDD[551] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 419.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 420 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 420 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 420 (KafkaRDD[574] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_420 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 
419.0 (TID 419, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_417_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_420_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_383_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_420_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 420 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 420 (KafkaRDD[574] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 420.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Got job 421 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 421 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting ResultStage 421 (KafkaRDD[575] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 387 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_421 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 420.0 (TID 420, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_385_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.MemoryStore: Block broadcast_421_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_385_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_421_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO spark.SparkContext: Created broadcast 421 from broadcast at DAGScheduler.scala:1006 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 421 (KafkaRDD[575] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Adding task set 421.0 with 1 tasks 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 421.0 (TID 421, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_419_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_418_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_420_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned 
accumulator 389 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_387_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Added broadcast_421_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_387_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 388 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_386_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_386_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 391 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_389_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_389_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 390 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_388_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_388_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 393 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_391_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_391_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 392 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_390_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_390_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 395 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_393_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_393_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO spark.ContextCleaner: Cleaned accumulator 394 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_392_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_392_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_394_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:00 INFO storage.BlockManagerInfo: Removed broadcast_394_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 416.0 (TID 416) in 152 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 416.0, whose tasks have all completed, from pool 18/04/17 16:48:00 
INFO scheduler.DAGScheduler: ResultStage 416 (foreachPartition at PredictorEngineApp.java:153) finished in 0.154 s 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Job 417 finished: foreachPartition at PredictorEngineApp.java:153, took 0.258661 s 18/04/17 16:48:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1196996a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1196996a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38657, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c8e, negotiated timeout = 60000 18/04/17 16:48:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c8e 18/04/17 16:48:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c8e closed 18/04/17 16:48:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 421.0 (TID 421) in 170 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:48:00 INFO scheduler.DAGScheduler: ResultStage 421 (foreachPartition at PredictorEngineApp.java:153) finished in 0.171 s 18/04/17 16:48:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 421.0, whose tasks have all completed, from pool 18/04/17 16:48:00 INFO scheduler.DAGScheduler: Job 421 finished: foreachPartition at PredictorEngineApp.java:153, took 0.286819 s 18/04/17 16:48:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x108fd2ae connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x108fd2ae0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.12 from job set of time 1523972880000 ms 18/04/17 16:48:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55916, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9351, negotiated timeout = 60000 18/04/17 16:48:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9351 18/04/17 16:48:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9351 closed 18/04/17 16:48:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.35 from job set of time 1523972880000 ms 18/04/17 16:48:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 406.0 (TID 406) in 1126 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:48:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 406.0, whose tasks have all completed, from pool 18/04/17 16:48:01 INFO scheduler.DAGScheduler: ResultStage 406 (foreachPartition at PredictorEngineApp.java:153) finished in 1.127 s 18/04/17 16:48:01 INFO scheduler.DAGScheduler: Job 406 finished: foreachPartition at PredictorEngineApp.java:153, took 1.204714 s 18/04/17 16:48:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63dd04b0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x63dd04b00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38665, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c95, negotiated timeout = 60000 18/04/17 16:48:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c95 18/04/17 16:48:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c95 closed 18/04/17 16:48:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:01 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.25 from job set of time 1523972880000 ms 18/04/17 16:48:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 405.0 (TID 405) in 3262 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:48:03 INFO scheduler.DAGScheduler: ResultStage 405 (foreachPartition at PredictorEngineApp.java:153) finished in 3.262 s 18/04/17 16:48:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 405.0, whose tasks have all completed, from pool 18/04/17 16:48:03 INFO scheduler.DAGScheduler: Job 405 finished: foreachPartition at PredictorEngineApp.java:153, took 3.374440 s 18/04/17 16:48:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x59371377 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x593713770x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 403.0 (TID 403) in 3336 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:48:03 INFO scheduler.DAGScheduler: ResultStage 403 (foreachPartition at PredictorEngineApp.java:153) finished in 3.337 s 18/04/17 16:48:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 403.0, whose tasks have all completed, from pool 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:03 INFO scheduler.DAGScheduler: Job 403 finished: foreachPartition at PredictorEngineApp.java:153, took 3.377378 s 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38675, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x166a9510 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x166a95100x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34081, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c97, negotiated timeout = 60000 18/04/17 16:48:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c97 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9386, negotiated timeout = 60000 18/04/17 16:48:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 395.0 (TID 395) in 3377 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:48:03 INFO scheduler.DAGScheduler: ResultStage 395 (foreachPartition at PredictorEngineApp.java:153) finished in 3.377 s 18/04/17 16:48:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 395.0, whose tasks have all completed, from pool 18/04/17 16:48:03 INFO scheduler.DAGScheduler: Job 395 finished: foreachPartition at PredictorEngineApp.java:153, took 3.390107 s 18/04/17 16:48:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1181f843 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1181f8430x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34084, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c97 closed 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9386 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9387, negotiated timeout = 60000 18/04/17 16:48:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.7 from job set of time 1523972880000 ms 18/04/17 16:48:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9386 closed 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9387 18/04/17 16:48:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.32 from job set of time 1523972880000 ms 18/04/17 16:48:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9387 closed 18/04/17 16:48:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.9 from job set of time 1523972880000 ms 18/04/17 16:48:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 418.0 (TID 418) in 4577 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:48:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 418.0, whose tasks have all completed, from pool 18/04/17 16:48:04 INFO scheduler.DAGScheduler: ResultStage 418 (foreachPartition at PredictorEngineApp.java:153) finished in 4.577 s 18/04/17 16:48:04 INFO scheduler.DAGScheduler: Job 419 finished: foreachPartition at PredictorEngineApp.java:153, took 4.687044 s 18/04/17 16:48:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7efeb57 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7efeb570x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55941, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9359, negotiated timeout = 60000 18/04/17 16:48:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9359 18/04/17 16:48:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9359 closed 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.24 from job set of time 1523972880000 ms 18/04/17 16:48:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 420.0 (TID 420) in 4611 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:48:04 INFO scheduler.DAGScheduler: ResultStage 420 (foreachPartition at PredictorEngineApp.java:153) finished in 4.612 s 18/04/17 16:48:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 420.0, whose tasks have all completed, from pool 18/04/17 16:48:04 INFO scheduler.DAGScheduler: Job 420 finished: foreachPartition at PredictorEngineApp.java:153, took 4.726848 s 18/04/17 16:48:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x477db03a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x477db03a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55944, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a935a, negotiated timeout = 60000 18/04/17 16:48:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a935a 18/04/17 16:48:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a935a closed 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.34 from job set of time 1523972880000 ms 18/04/17 16:48:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 404.0 (TID 404) in 4725 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:48:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 404.0, whose tasks have all completed, from pool 18/04/17 16:48:04 INFO scheduler.DAGScheduler: ResultStage 404 (foreachPartition at PredictorEngineApp.java:153) finished in 4.726 s 18/04/17 16:48:04 INFO scheduler.DAGScheduler: Job 404 finished: foreachPartition at PredictorEngineApp.java:153, took 4.796071 s 18/04/17 16:48:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2250bf20 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2250bf200x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38691, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c98, negotiated timeout = 60000 18/04/17 16:48:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c98 18/04/17 16:48:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c98 closed 18/04/17 16:48:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.8 from job set of time 1523972880000 ms 18/04/17 16:48:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 413.0 (TID 413) in 4918 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:48:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 413.0, whose tasks have all completed, from pool 18/04/17 16:48:05 INFO scheduler.DAGScheduler: ResultStage 413 (foreachPartition at PredictorEngineApp.java:153) finished in 4.918 s 18/04/17 16:48:05 INFO scheduler.DAGScheduler: Job 413 finished: foreachPartition at PredictorEngineApp.java:153, took 5.012613 s 18/04/17 16:48:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4e6c64eb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4e6c64eb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38694, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c99, negotiated timeout = 60000 18/04/17 16:48:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c99 18/04/17 16:48:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c99 closed 18/04/17 16:48:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.31 from job set of time 1523972880000 ms 18/04/17 16:48:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 407.0 (TID 407) in 5337 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:48:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 407.0, whose tasks have all completed, from pool 18/04/17 16:48:05 INFO scheduler.DAGScheduler: ResultStage 407 (foreachPartition at PredictorEngineApp.java:153) finished in 5.338 s 18/04/17 16:48:05 INFO scheduler.DAGScheduler: Job 407 finished: foreachPartition at PredictorEngineApp.java:153, took 5.417537 s 18/04/17 16:48:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f36b8e7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f36b8e70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55954, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a935c, negotiated timeout = 60000 18/04/17 16:48:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a935c 18/04/17 16:48:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a935c closed 18/04/17 16:48:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.28 from job set of time 1523972880000 ms 18/04/17 16:48:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 411.0 (TID 411) in 7636 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:48:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 411.0, whose tasks have all completed, from pool 18/04/17 16:48:07 INFO scheduler.DAGScheduler: ResultStage 411 (foreachPartition at PredictorEngineApp.java:153) finished in 7.637 s 18/04/17 16:48:07 INFO scheduler.DAGScheduler: Job 411 finished: foreachPartition at PredictorEngineApp.java:153, took 7.725411 s 18/04/17 16:48:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6bf0b25d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6bf0b25d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34109, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c938d, negotiated timeout = 60000 18/04/17 16:48:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c938d 18/04/17 16:48:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c938d closed 18/04/17 16:48:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.15 from job set of time 1523972880000 ms 18/04/17 16:48:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 415.0 (TID 415) in 8014 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:48:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 415.0, whose tasks have all completed, from pool 18/04/17 16:48:08 INFO scheduler.DAGScheduler: ResultStage 415 (foreachPartition at PredictorEngineApp.java:153) finished in 8.015 s 18/04/17 16:48:08 INFO scheduler.DAGScheduler: Job 415 finished: foreachPartition at PredictorEngineApp.java:153, took 8.116252 s 18/04/17 16:48:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x51050add connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x51050add0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
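The cycle that repeats in the surrounding entries — a "foreachPartition at PredictorEngineApp.java:153" job finishing, then an hconnection-0x... client opening a session against the /hbase ZooKeeper quorum and closing it moments later — is consistent with the application writing each completed micro-batch out to HBase through short-lived connections. Below is a minimal, purely illustrative sketch of one common way such a write is structured; the class, table, and column names are assumptions, and the log alone does not show whether the connection is opened on the driver or inside each partition.

    import java.io.Serializable;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.spark.api.java.JavaRDD;

    // Illustrative only: HBaseWriterSketch, Prediction, the "predictions" table
    // and the "d:score" column are assumptions, not taken from the log.
    public final class HBaseWriterSketch {

        // Hypothetical record type standing in for whatever the application emits.
        public static final class Prediction implements Serializable {
            public final String rowKey;
            public final double score;
            public Prediction(String rowKey, double score) {
                this.rowKey = rowKey;
                this.score = score;
            }
        }

        // Rough shape of an output operation like the one logged as
        // "foreachPartition at PredictorEngineApp.java:153".
        public static void writeToHBase(JavaRDD<Prediction> predictions) {
            predictions.foreachPartition(partition -> {
                // hbase-site.xml on the classpath supplies the ZooKeeper quorum
                // seen in the "connecting to ZooKeeper ensemble ..." entries.
                Configuration conf = HBaseConfiguration.create();
                // Creating the connection opens a session like "hconnection-0x...";
                // closing it produces "Closing zookeeper sessionid=0x...".
                try (Connection connection = ConnectionFactory.createConnection(conf);
                     Table table = connection.getTable(TableName.valueOf("predictions"))) {
                    while (partition.hasNext()) {
                        Prediction p = partition.next();
                        Put put = new Put(Bytes.toBytes(p.rowKey));
                        put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("score"),
                                      Bytes.toBytes(p.score));
                        table.put(put);
                    }
                }
            });
        }
    }

Opening and closing a fresh connection for every output operation is what produces the steady stream of new ZooKeeper session ids (0x1626..., 0x2626..., 0x3626...) throughout this log.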
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38707, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c9a, negotiated timeout = 60000 18/04/17 16:48:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c9a 18/04/17 16:48:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 410.0 (TID 410) in 8050 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:48:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 410.0, whose tasks have all completed, from pool 18/04/17 16:48:08 INFO scheduler.DAGScheduler: ResultStage 410 (foreachPartition at PredictorEngineApp.java:153) finished in 8.051 s 18/04/17 16:48:08 INFO scheduler.DAGScheduler: Job 410 finished: foreachPartition at PredictorEngineApp.java:153, took 8.136373 s 18/04/17 16:48:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x58616cad connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x58616cad0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55966, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c9a closed 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a935f, negotiated timeout = 60000 18/04/17 16:48:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a935f 18/04/17 16:48:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.19 from job set of time 1523972880000 ms 18/04/17 16:48:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a935f closed 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.29 from job set of time 1523972880000 ms 18/04/17 16:48:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 409.0 (TID 409) in 8414 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:48:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 409.0, whose tasks have all completed, from pool 18/04/17 16:48:08 INFO scheduler.DAGScheduler: ResultStage 409 (foreachPartition at PredictorEngineApp.java:153) finished in 8.415 s 18/04/17 16:48:08 INFO scheduler.DAGScheduler: Job 409 finished: foreachPartition at PredictorEngineApp.java:153, took 8.496829 s 18/04/17 16:48:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x39113758 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 16:48:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x391137580x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38714, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c9b, negotiated timeout = 60000 18/04/17 16:48:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c9b 18/04/17 16:48:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c9b closed 18/04/17 16:48:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.26 from job set of time 1523972880000 ms 18/04/17 16:48:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 414.0 (TID 414) in 10340 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:48:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 414.0, whose tasks have all completed, from pool 18/04/17 16:48:10 INFO scheduler.DAGScheduler: ResultStage 414 (foreachPartition at PredictorEngineApp.java:153) finished in 10.341 s 18/04/17 16:48:10 INFO scheduler.DAGScheduler: Job 414 finished: foreachPartition at PredictorEngineApp.java:153, took 10.439343 s 18/04/17 16:48:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x8d04e0f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x8d04e0f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38719, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c9d, negotiated timeout = 60000 18/04/17 16:48:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c9d 18/04/17 16:48:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c9d closed 18/04/17 16:48:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.6 from job set of time 1523972880000 ms 18/04/17 16:48:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 399.0 (TID 399) in 10696 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:48:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 399.0, whose tasks have all completed, from pool 18/04/17 16:48:10 INFO scheduler.DAGScheduler: ResultStage 399 (foreachPartition at PredictorEngineApp.java:153) finished in 10.696 s 18/04/17 16:48:10 INFO scheduler.DAGScheduler: Job 399 finished: foreachPartition at PredictorEngineApp.java:153, took 10.723062 s 18/04/17 16:48:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3e1aff03 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3e1aff030x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38722, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28c9e, negotiated timeout = 60000 18/04/17 16:48:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28c9e 18/04/17 16:48:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28c9e closed 18/04/17 16:48:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:10 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.18 from job set of time 1523972880000 ms 18/04/17 16:48:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 397.0 (TID 397) in 11559 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:48:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 397.0, whose tasks have all completed, from pool 18/04/17 16:48:11 INFO scheduler.DAGScheduler: ResultStage 397 (foreachPartition at PredictorEngineApp.java:153) finished in 11.559 s 18/04/17 16:48:11 INFO scheduler.DAGScheduler: Job 397 finished: foreachPartition at PredictorEngineApp.java:153, took 11.579863 s 18/04/17 16:48:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x19e68a85 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x19e68a850x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34131, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9391, negotiated timeout = 60000 18/04/17 16:48:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9391 18/04/17 16:48:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9391 closed 18/04/17 16:48:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.2 from job set of time 1523972880000 ms 18/04/17 16:48:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 396.0 (TID 396) in 11894 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:48:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 396.0, whose tasks have all completed, from pool 18/04/17 16:48:11 INFO scheduler.DAGScheduler: ResultStage 396 (foreachPartition at PredictorEngineApp.java:153) finished in 11.894 s 18/04/17 16:48:11 INFO scheduler.DAGScheduler: Job 396 finished: foreachPartition at PredictorEngineApp.java:153, took 11.910924 s 18/04/17 16:48:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x448de4fa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x448de4fa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55985, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9363, negotiated timeout = 60000 18/04/17 16:48:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9363 18/04/17 16:48:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9363 closed 18/04/17 16:48:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:11 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.20 from job set of time 1523972880000 ms 18/04/17 16:48:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 417.0 (TID 417) in 12022 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:48:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 417.0, whose tasks have all completed, from pool 18/04/17 16:48:12 INFO scheduler.DAGScheduler: ResultStage 417 (foreachPartition at PredictorEngineApp.java:153) finished in 12.023 s 18/04/17 16:48:12 INFO scheduler.DAGScheduler: Job 418 finished: foreachPartition at PredictorEngineApp.java:153, took 12.129924 s 18/04/17 16:48:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x282d0b0b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x282d0b0b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55989, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9364, negotiated timeout = 60000 18/04/17 16:48:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9364 18/04/17 16:48:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9364 closed 18/04/17 16:48:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:12 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.5 from job set of time 1523972880000 ms 18/04/17 16:48:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 400.0 (TID 400) in 12870 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:48:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 400.0, whose tasks have all completed, from pool 18/04/17 16:48:12 INFO scheduler.DAGScheduler: ResultStage 400 (foreachPartition at PredictorEngineApp.java:153) finished in 12.871 s 18/04/17 16:48:12 INFO scheduler.DAGScheduler: Job 400 finished: foreachPartition at PredictorEngineApp.java:153, took 12.900738 s 18/04/17 16:48:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6b0f81ae connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6b0f81ae0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38737, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ca1, negotiated timeout = 60000 18/04/17 16:48:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ca1 18/04/17 16:48:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ca1 closed 18/04/17 16:48:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:12 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.27 from job set of time 1523972880000 ms 18/04/17 16:48:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 401.0 (TID 401) in 12923 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:48:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 401.0, whose tasks have all completed, from pool 18/04/17 16:48:13 INFO scheduler.DAGScheduler: ResultStage 401 (foreachPartition at PredictorEngineApp.java:153) finished in 12.924 s 18/04/17 16:48:13 INFO scheduler.DAGScheduler: Job 401 finished: foreachPartition at PredictorEngineApp.java:153, took 12.957208 s 18/04/17 16:48:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x748b43eb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x748b43eb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:55996, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9365, negotiated timeout = 60000 18/04/17 16:48:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9365 18/04/17 16:48:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9365 closed 18/04/17 16:48:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.22 from job set of time 1523972880000 ms 18/04/17 16:48:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 402.0 (TID 402) in 14176 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:48:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 402.0, whose tasks have all completed, from pool 18/04/17 16:48:14 INFO scheduler.DAGScheduler: ResultStage 402 (foreachPartition at PredictorEngineApp.java:153) finished in 14.176 s 18/04/17 16:48:14 INFO scheduler.DAGScheduler: Job 402 finished: foreachPartition at PredictorEngineApp.java:153, took 14.213087 s 18/04/17 16:48:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5fcc4c6b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5fcc4c6b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56004, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9367, negotiated timeout = 60000 18/04/17 16:48:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9367 18/04/17 16:48:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9367 closed 18/04/17 16:48:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.33 from job set of time 1523972880000 ms 18/04/17 16:48:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 412.0 (TID 412) in 14317 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:48:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 412.0, whose tasks have all completed, from pool 18/04/17 16:48:14 INFO scheduler.DAGScheduler: ResultStage 412 (foreachPartition at PredictorEngineApp.java:153) finished in 14.319 s 18/04/17 16:48:14 INFO scheduler.DAGScheduler: Job 412 finished: foreachPartition at PredictorEngineApp.java:153, took 14.409869 s 18/04/17 16:48:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ad76171 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4ad761710x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38751, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ca3, negotiated timeout = 60000 18/04/17 16:48:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ca3 18/04/17 16:48:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ca3 closed 18/04/17 16:48:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.23 from job set of time 1523972880000 ms 18/04/17 16:48:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 398.0 (TID 398) in 15894 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:48:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 398.0, whose tasks have all completed, from pool 18/04/17 16:48:15 INFO scheduler.DAGScheduler: ResultStage 398 (foreachPartition at PredictorEngineApp.java:153) finished in 15.896 s 18/04/17 16:48:15 INFO scheduler.DAGScheduler: Job 398 finished: foreachPartition at PredictorEngineApp.java:153, took 15.919312 s 18/04/17 16:48:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d317509 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7d3175090x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34160, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9397, negotiated timeout = 60000 18/04/17 16:48:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9397 18/04/17 16:48:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9397 closed 18/04/17 16:48:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:16 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.1 from job set of time 1523972880000 ms 18/04/17 16:48:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 408.0 (TID 408) in 17940 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:48:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 408.0, whose tasks have all completed, from pool 18/04/17 16:48:18 INFO scheduler.DAGScheduler: ResultStage 408 (foreachPartition at PredictorEngineApp.java:153) finished in 17.941 s 18/04/17 16:48:18 INFO scheduler.DAGScheduler: Job 408 finished: foreachPartition at PredictorEngineApp.java:153, took 18.025407 s 18/04/17 16:48:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7251bb66 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7251bb660x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56017, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9369, negotiated timeout = 60000 18/04/17 16:48:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9369 18/04/17 16:48:18 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9369 closed 18/04/17 16:48:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:18 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.10 from job set of time 1523972880000 ms 18/04/17 16:48:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 419.0 (TID 419) in 18387 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:48:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 419.0, whose tasks have all completed, from pool 18/04/17 16:48:18 INFO scheduler.DAGScheduler: ResultStage 419 (foreachPartition at PredictorEngineApp.java:153) finished in 18.388 s 18/04/17 16:48:18 INFO scheduler.DAGScheduler: Job 416 finished: foreachPartition at PredictorEngineApp.java:153, took 18.499641 s 18/04/17 16:48:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x73d85cb4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:48:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x73d85cb40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:48:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
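The entries that follow wrap up the batch for time 1523972880000 ms — "Total delay: 18.596 s ... (execution: 18.543 s)", i.e. only about 0.05 s was spent waiting to be scheduled — after which the input KafkaRDDs of the finished batch are removed from the persistence list and their small broadcasts and accumulators are cleaned (routine end-of-batch housekeeping by Spark Streaming and the ContextCleaner), and the next batch (1523972940000 ms, 60 seconds later) is queued with the same numbered set of output operations. Below is a minimal sketch of a driver program consistent with that cadence and with "createDirectStream at PredictorEngineApp.java:125"; the broker list, topic name, and class names are assumptions, and the 60 s batch interval is inferred from the batch timestamps.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    // Illustrative only: StreamingSetupSketch, the broker list and the topic
    // name are assumptions, not taken from the log.
    public final class StreamingSetupSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("predictor-engine");
            // 60 s batches, matching the one-minute spacing of the batch times.
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

            Map<String, String> kafkaParams = new HashMap<>();
            kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // assumption

            Set<String> topics = new HashSet<>(Arrays.asList("events"));          // assumption

            // Rough equivalent of "createDirectStream at PredictorEngineApp.java:125";
            // each batch becomes the KafkaRDDs seen in the log.
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class,
                    StringDecoder.class, StringDecoder.class,
                    kafkaParams, topics);

            // One of several output operations per batch; the log shows them
            // numbered ms.0 .. ms.35 for each batch time.
            stream.foreachRDD(rdd ->
                    rdd.foreachPartition(partition -> {
                        while (partition.hasNext()) {
                            partition.next(); // write to HBase here (see the earlier sketch)
                        }
                    }));

            jssc.start();
            jssc.awaitTermination();
        }
    }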
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:48:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34170, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:48:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9398, negotiated timeout = 60000 18/04/17 16:48:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9398 18/04/17 16:48:18 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9398 closed 18/04/17 16:48:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:48:18 INFO scheduler.JobScheduler: Finished job streaming job 1523972880000 ms.11 from job set of time 1523972880000 ms 18/04/17 16:48:18 INFO scheduler.JobScheduler: Total delay: 18.596 s for time 1523972880000 ms (execution: 18.543 s) 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 504 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 504 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 504 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 504 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 505 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 505 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 505 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 505 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 506 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 506 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 506 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 506 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 507 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 507 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 507 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 507 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 508 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 508 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 508 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 508 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 509 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 509 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 509 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 509 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 510 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 510 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 510 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 510 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 511 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 511 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 511 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 511 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 512 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 512 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 512 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 512 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 513 from persistence list 18/04/17 
16:48:18 INFO storage.BlockManager: Removing RDD 513 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 513 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 513 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 514 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 514 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 514 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 514 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 515 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 515 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 515 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 515 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 516 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 516 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 516 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 516 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 517 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 517 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 517 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 517 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 518 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 518 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 518 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 518 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 519 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 519 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 519 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 519 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 520 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 520 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 520 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 520 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 521 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 521 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 521 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 521 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 411 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 522 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 522 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 522 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_395_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 522 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 523 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 523 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 523 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 523 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 524 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_395_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 524 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 524 from 
persistence list 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 396 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 398 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 524 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 525 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 525 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 525 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_396_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 525 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 526 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_396_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 526 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 526 from persistence list 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 397 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 526 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 527 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 527 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 527 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_398_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 527 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 528 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_398_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 528 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 528 from persistence list 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 399 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 528 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 529 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 529 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 529 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_397_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 529 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 530 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_397_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 530 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 530 from persistence list 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 401 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 530 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 531 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 531 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 531 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_399_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 531 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 532 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed 
broadcast_399_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 532 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 532 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 532 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 400 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 533 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 533 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 533 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_401_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 533 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 534 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_401_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 534 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 534 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 534 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 402 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 535 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 535 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 535 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_400_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_400_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 535 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 536 from persistence list 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 404 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 536 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 536 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 536 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 537 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_402_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 537 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 537 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_402_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 537 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 538 from persistence list 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 403 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 538 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 538 from persistence list 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 538 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 539 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_404_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 539 18/04/17 16:48:18 INFO kafka.KafkaRDD: Removing RDD 539 from persistence list 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed 
broadcast_404_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO storage.BlockManager: Removing RDD 539 18/04/17 16:48:18 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 405 18/04/17 16:48:18 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972760000 ms 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_403_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_403_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 407 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_405_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_405_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 406 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_407_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_407_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 408 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_406_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_406_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 410 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_408_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_408_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 409 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_410_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_410_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_409_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_409_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 413 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_411_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_411_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 412 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_413_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_413_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO 
spark.ContextCleaner: Cleaned accumulator 414 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_412_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_412_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 416 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_414_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_414_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 415 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_416_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_416_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 417 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_415_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_415_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 419 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_417_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_417_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 418 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_419_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_419_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 420 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_418_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_418_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_421_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_421_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 422 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_420_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:48:18 INFO storage.BlockManagerInfo: Removed broadcast_420_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:48:18 INFO spark.ContextCleaner: Cleaned accumulator 421 18/04/17 16:49:00 INFO scheduler.JobScheduler: Added jobs for time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.0 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.1 from job set of time 
1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.0 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.2 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.3 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.4 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.5 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.4 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.3 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.6 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.7 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.8 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.9 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.10 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.11 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.12 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.13 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.14 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.13 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.16 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.15 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.16 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.17 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.14 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.18 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.17 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.19 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.20 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.21 from job set of time 
1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.22 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.23 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.21 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.25 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.26 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.24 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.27 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.28 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.29 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.30 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.30 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.31 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.32 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.33 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.34 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Starting job streaming job 1523972940000 ms.35 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.35 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 422 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 422 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 422 (KafkaRDD[604] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_422 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: 
foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_422_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_422_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 422 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 422 (KafkaRDD[604] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 422.0 with 1 tasks 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 424 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 423 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 423 (KafkaRDD[607] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 422.0 (TID 422, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_423 stored as values in memory (estimated size 5.7 KB, free 490.5 
MB) 18/04/17 16:49:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_423_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_423_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 423 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 423 (KafkaRDD[607] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 423.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 423 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 424 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 424 (KafkaRDD[591] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 423.0 (TID 423, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_424 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_424_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_424_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 424 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 424 (KafkaRDD[591] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 424.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 425 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 425 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 425 (KafkaRDD[585] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 424.0 (TID 424, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_425 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_425_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_425_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 425 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO 
scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 425 (KafkaRDD[585] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 425.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 426 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 426 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 426 (KafkaRDD[605] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 425.0 (TID 425, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_426 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_426_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_426_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 426 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 426 (KafkaRDD[605] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 426.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 427 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 427 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 427 (KafkaRDD[584] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 426.0 (TID 426, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_427 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_427_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_427_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 427 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 427 (KafkaRDD[584] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 427.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 429 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 428 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO 
scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 428 (KafkaRDD[578] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_428 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 427.0 (TID 427, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_423_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_428_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_428_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 428 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 428 (KafkaRDD[578] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 428.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 430 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 429 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 429 (KafkaRDD[582] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_422_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 428.0 (TID 428, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_424_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_429 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_429_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_429_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 429 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 429 (KafkaRDD[582] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 429.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 431 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 430 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO 
scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 430 (KafkaRDD[608] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 429.0 (TID 429, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_430 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_430_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_430_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_426_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 430 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 430 (KafkaRDD[608] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 430.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 428 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 431 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 431 (KafkaRDD[596] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 430.0 (TID 430, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_431 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_427_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_431_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_431_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_429_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 431 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 431 (KafkaRDD[596] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 431.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 432 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 432 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO 
scheduler.DAGScheduler: Submitting ResultStage 432 (KafkaRDD[599] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 431.0 (TID 431, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_432 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_428_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_432_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_432_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 432 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 432 (KafkaRDD[599] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 432.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 433 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 433 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 433 (KafkaRDD[586] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_433 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 432.0 (TID 432, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_430_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_433_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_433_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 433 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 433 (KafkaRDD[586] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 433.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 434 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 434 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 434 (KafkaRDD[595] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 433.0 
(TID 433, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_434 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_431_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_434_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_434_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 434 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 434 (KafkaRDD[595] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 434.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 435 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 435 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 435 (KafkaRDD[581] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_435 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 434.0 (TID 434, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_435_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_435_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 435 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 435 (KafkaRDD[581] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 435.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 436 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 436 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 436 (KafkaRDD[587] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_436 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 435.0 (TID 435, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_436_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO 
storage.BlockManagerInfo: Added broadcast_436_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 436 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 436 (KafkaRDD[587] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 436.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 437 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 437 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 437 (KafkaRDD[610] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_437 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 436.0 (TID 436, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_434_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_425_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_433_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_435_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_437_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_437_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 437 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 437 (KafkaRDD[610] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 437.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 438 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 438 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 438 (KafkaRDD[583] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_438 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 437.0 (TID 437, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_438_piece0 stored as bytes in memory 
(estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_438_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 438 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 438 (KafkaRDD[583] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 438.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 439 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 439 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 439 (KafkaRDD[600] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_439 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 438.0 (TID 438, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_436_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_439_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_439_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 439 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 439 (KafkaRDD[600] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 439.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 441 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 440 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 440 (KafkaRDD[601] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_440 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 439.0 (TID 439, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_432_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_440_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_440_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created 
broadcast 440 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 440 (KafkaRDD[601] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 440.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 442 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 441 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 441 (KafkaRDD[577] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_438_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_437_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_441 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 440.0 (TID 440, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_441_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_441_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 441 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 441 (KafkaRDD[577] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 441.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 440 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 442 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 442 (KafkaRDD[588] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_442 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 441.0 (TID 441, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 429.0 (TID 429) in 54 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 429.0, whose tasks have all completed, from pool 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_442_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_442_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO 
spark.SparkContext: Created broadcast 442 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 442 (KafkaRDD[588] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 442.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 443 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 443 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 443 (KafkaRDD[598] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_443 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 442.0 (TID 442, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_440_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_443_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_443_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 443 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 443 (KafkaRDD[598] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 443.0 with 1 tasks 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_439_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 444 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 444 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 444 (KafkaRDD[602] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_444 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 443.0 (TID 443, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_444_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_444_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 444 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 444 (KafkaRDD[602] at createDirectStream at 
PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 444.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 445 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 445 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 445 (KafkaRDD[594] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_445 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 444.0 (TID 444, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_443_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_445_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_445_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 445 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 445 (KafkaRDD[594] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 445.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 446 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 446 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 446 (KafkaRDD[603] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_446 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_442_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 445.0 (TID 445, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_441_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_446_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_446_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 446 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 446 (KafkaRDD[603] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding 
task set 446.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Got job 447 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 447 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting ResultStage 447 (KafkaRDD[609] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_444_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_447 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 446.0 (TID 446, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:49:00 INFO storage.MemoryStore: Block broadcast_447_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_447_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:49:00 INFO spark.SparkContext: Created broadcast 447 from broadcast at DAGScheduler.scala:1006 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 447 (KafkaRDD[609] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Adding task set 447.0 with 1 tasks 18/04/17 16:49:00 INFO scheduler.DAGScheduler: ResultStage 429 (foreachPartition at PredictorEngineApp.java:153) finished in 0.071 s 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Job 430 finished: foreachPartition at PredictorEngineApp.java:153, took 0.101583 s 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 447.0 (TID 447, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:49:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7667fed9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7667fed90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34324, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_445_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_446_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO storage.BlockManagerInfo: Added broadcast_447_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:49:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93a9, negotiated timeout = 60000 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 432.0 (TID 432) in 80 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: ResultStage 432 (foreachPartition at PredictorEngineApp.java:153) finished in 0.080 s 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 432.0, whose tasks have all completed, from pool 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Job 432 finished: foreachPartition at PredictorEngineApp.java:153, took 0.120314 s 18/04/17 16:49:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93a9 18/04/17 16:49:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93a9 closed 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.6 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.23 from job set of time 1523972940000 ms 18/04/17 16:49:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 426.0 (TID 426) in 174 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:49:00 INFO scheduler.DAGScheduler: ResultStage 426 (foreachPartition at PredictorEngineApp.java:153) finished in 0.174 s 18/04/17 16:49:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 426.0, whose tasks have all completed, from pool 18/04/17 16:49:00 INFO scheduler.DAGScheduler: Job 426 finished: foreachPartition at PredictorEngineApp.java:153, took 0.196617 s 18/04/17 16:49:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5b553e40 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5b553e400x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56178, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9373, negotiated timeout = 60000 18/04/17 16:49:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9373 18/04/17 16:49:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9373 closed 18/04/17 16:49:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:00 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.29 from job set of time 1523972940000 ms 18/04/17 16:49:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 440.0 (TID 440) in 1310 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:49:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 440.0, whose tasks have all completed, from pool 18/04/17 16:49:01 INFO scheduler.DAGScheduler: ResultStage 440 (foreachPartition at PredictorEngineApp.java:153) finished in 1.312 s 18/04/17 16:49:01 INFO scheduler.DAGScheduler: Job 441 finished: foreachPartition at PredictorEngineApp.java:153, took 1.377498 s 18/04/17 16:49:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7ca52cc6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7ca52cc60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34331, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93b0, negotiated timeout = 60000 18/04/17 16:49:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93b0 18/04/17 16:49:01 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93b0 closed 18/04/17 16:49:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:01 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.25 from job set of time 1523972940000 ms 18/04/17 16:49:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 427.0 (TID 427) in 1670 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:49:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 427.0, whose tasks have all completed, from pool 18/04/17 16:49:01 INFO scheduler.DAGScheduler: ResultStage 427 (foreachPartition at PredictorEngineApp.java:153) finished in 1.671 s 18/04/17 16:49:01 INFO scheduler.DAGScheduler: Job 427 finished: foreachPartition at PredictorEngineApp.java:153, took 1.695232 s 18/04/17 16:49:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x662c1429 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x662c14290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38929, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cb5, negotiated timeout = 60000 18/04/17 16:49:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cb5 18/04/17 16:49:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cb5 closed 18/04/17 16:49:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:01 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.8 from job set of time 1523972940000 ms 18/04/17 16:49:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 438.0 (TID 438) in 2295 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:49:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 438.0, whose tasks have all completed, from pool 18/04/17 16:49:02 INFO scheduler.DAGScheduler: ResultStage 438 (foreachPartition at PredictorEngineApp.java:153) finished in 2.296 s 18/04/17 16:49:02 INFO scheduler.DAGScheduler: Job 438 finished: foreachPartition at PredictorEngineApp.java:153, took 2.354948 s 18/04/17 16:49:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x307a0481 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x307a04810x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34339, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93b1, negotiated timeout = 60000 18/04/17 16:49:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93b1 18/04/17 16:49:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93b1 closed 18/04/17 16:49:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:02 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.7 from job set of time 1523972940000 ms 18/04/17 16:49:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 442.0 (TID 442) in 2837 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:49:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 442.0, whose tasks have all completed, from pool 18/04/17 16:49:03 INFO scheduler.DAGScheduler: ResultStage 442 (foreachPartition at PredictorEngineApp.java:153) finished in 2.838 s 18/04/17 16:49:03 INFO scheduler.DAGScheduler: Job 440 finished: foreachPartition at PredictorEngineApp.java:153, took 2.922037 s 18/04/17 16:49:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2fae309b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2fae309b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34343, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93b2, negotiated timeout = 60000 18/04/17 16:49:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93b2 18/04/17 16:49:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93b2 closed 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.12 from job set of time 1523972940000 ms 18/04/17 16:49:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 423.0 (TID 423) in 2951 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:49:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 423.0, whose tasks have all completed, from pool 18/04/17 16:49:03 INFO scheduler.DAGScheduler: ResultStage 423 (foreachPartition at PredictorEngineApp.java:153) finished in 2.951 s 18/04/17 16:49:03 INFO scheduler.DAGScheduler: Job 424 finished: foreachPartition at PredictorEngineApp.java:153, took 2.965425 s 18/04/17 16:49:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x73642eba connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x73642eba0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56198, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a937c, negotiated timeout = 60000 18/04/17 16:49:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a937c 18/04/17 16:49:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a937c closed 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.31 from job set of time 1523972940000 ms 18/04/17 16:49:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 425.0 (TID 425) in 3272 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:49:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 425.0, whose tasks have all completed, from pool 18/04/17 16:49:03 INFO scheduler.DAGScheduler: ResultStage 425 (foreachPartition at PredictorEngineApp.java:153) finished in 3.272 s 18/04/17 16:49:03 INFO scheduler.DAGScheduler: Job 425 finished: foreachPartition at PredictorEngineApp.java:153, took 3.292572 s 18/04/17 16:49:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3433b4ab connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3433b4ab0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38945, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cb8, negotiated timeout = 60000 18/04/17 16:49:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cb8 18/04/17 16:49:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cb8 closed 18/04/17 16:49:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:03 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.9 from job set of time 1523972940000 ms 18/04/17 16:49:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 446.0 (TID 446) in 3934 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:49:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 446.0, whose tasks have all completed, from pool 18/04/17 16:49:04 INFO scheduler.DAGScheduler: ResultStage 446 (foreachPartition at PredictorEngineApp.java:153) finished in 3.935 s 18/04/17 16:49:04 INFO scheduler.DAGScheduler: Job 446 finished: foreachPartition at PredictorEngineApp.java:153, took 4.027571 s 18/04/17 16:49:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x46b23352 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x46b233520x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34354, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93b3, negotiated timeout = 60000 18/04/17 16:49:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93b3 18/04/17 16:49:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93b3 closed 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 445.0 (TID 445) in 3959 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:49:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 445.0, whose tasks have all completed, from pool 18/04/17 16:49:04 INFO scheduler.DAGScheduler: ResultStage 445 (foreachPartition at PredictorEngineApp.java:153) finished in 3.960 s 18/04/17 16:49:04 INFO scheduler.DAGScheduler: Job 445 finished: foreachPartition at PredictorEngineApp.java:153, took 4.049868 s 18/04/17 16:49:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d21ce3b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7d21ce3b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34357, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.27 from job set of time 1523972940000 ms 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93b4, negotiated timeout = 60000 18/04/17 16:49:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93b4 18/04/17 16:49:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93b4 closed 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.18 from job set of time 1523972940000 ms 18/04/17 16:49:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 437.0 (TID 437) in 4170 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:49:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 437.0, whose tasks have all completed, from pool 18/04/17 16:49:04 INFO scheduler.DAGScheduler: ResultStage 437 (foreachPartition at PredictorEngineApp.java:153) finished in 4.171 s 18/04/17 16:49:04 INFO scheduler.DAGScheduler: Job 437 finished: foreachPartition at PredictorEngineApp.java:153, took 4.227465 s 18/04/17 16:49:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b857ca0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1b857ca00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34360, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93b5, negotiated timeout = 60000 18/04/17 16:49:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93b5 18/04/17 16:49:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93b5 closed 18/04/17 16:49:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:04 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.34 from job set of time 1523972940000 ms 18/04/17 16:49:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 422.0 (TID 422) in 5824 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:49:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 422.0, whose tasks have all completed, from pool 18/04/17 16:49:05 INFO scheduler.DAGScheduler: ResultStage 422 (foreachPartition at PredictorEngineApp.java:153) finished in 5.825 s 18/04/17 16:49:05 INFO scheduler.DAGScheduler: Job 422 finished: foreachPartition at PredictorEngineApp.java:153, took 5.835707 s 18/04/17 16:49:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2421fcc7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2421fcc70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38959, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cb9, negotiated timeout = 60000 18/04/17 16:49:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cb9 18/04/17 16:49:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cb9 closed 18/04/17 16:49:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:05 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.28 from job set of time 1523972940000 ms 18/04/17 16:49:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 430.0 (TID 430) in 6147 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:49:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 430.0, whose tasks have all completed, from pool 18/04/17 16:49:06 INFO scheduler.DAGScheduler: ResultStage 430 (foreachPartition at PredictorEngineApp.java:153) finished in 6.147 s 18/04/17 16:49:06 INFO scheduler.DAGScheduler: Job 431 finished: foreachPartition at PredictorEngineApp.java:153, took 6.180559 s 18/04/17 16:49:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x56410f86 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x56410f860x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38963, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cba, negotiated timeout = 60000 18/04/17 16:49:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cba 18/04/17 16:49:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cba closed 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.32 from job set of time 1523972940000 ms 18/04/17 16:49:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 431.0 (TID 431) in 6247 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:49:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 431.0, whose tasks have all completed, from pool 18/04/17 16:49:06 INFO scheduler.DAGScheduler: ResultStage 431 (foreachPartition at PredictorEngineApp.java:153) finished in 6.248 s 18/04/17 16:49:06 INFO scheduler.DAGScheduler: Job 428 finished: foreachPartition at PredictorEngineApp.java:153, took 6.285156 s 18/04/17 16:49:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2f7b38a5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2f7b38a50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38966, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cbb, negotiated timeout = 60000 18/04/17 16:49:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cbb 18/04/17 16:49:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cbb closed 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.20 from job set of time 1523972940000 ms 18/04/17 16:49:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 439.0 (TID 439) in 6255 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:49:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 439.0, whose tasks have all completed, from pool 18/04/17 16:49:06 INFO scheduler.DAGScheduler: ResultStage 439 (foreachPartition at PredictorEngineApp.java:153) finished in 6.255 s 18/04/17 16:49:06 INFO scheduler.DAGScheduler: Job 439 finished: foreachPartition at PredictorEngineApp.java:153, took 6.318243 s 18/04/17 16:49:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4add1b02 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4add1b020x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38969, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cbc, negotiated timeout = 60000 18/04/17 16:49:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cbc 18/04/17 16:49:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cbc closed 18/04/17 16:49:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:06 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.24 from job set of time 1523972940000 ms 18/04/17 16:49:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 424.0 (TID 424) in 7084 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:49:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 424.0, whose tasks have all completed, from pool 18/04/17 16:49:07 INFO scheduler.DAGScheduler: ResultStage 424 (foreachPartition at PredictorEngineApp.java:153) finished in 7.085 s 18/04/17 16:49:07 INFO scheduler.DAGScheduler: Job 423 finished: foreachPartition at PredictorEngineApp.java:153, took 7.102433 s 18/04/17 16:49:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7638cf44 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7638cf440x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 434.0 (TID 434) in 7051 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:49:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 434.0, whose tasks have all completed, from pool 18/04/17 16:49:07 INFO scheduler.DAGScheduler: ResultStage 434 (foreachPartition at PredictorEngineApp.java:153) finished in 7.051 s 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:07 INFO scheduler.DAGScheduler: Job 434 finished: foreachPartition at PredictorEngineApp.java:153, took 7.099172 s 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56230, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3bfb6531 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3bfb65310x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38975, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a937e, negotiated timeout = 60000 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cbe, negotiated timeout = 60000 18/04/17 16:49:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a937e 18/04/17 16:49:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cbe 18/04/17 16:49:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a937e closed 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cbe closed 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.15 from job set of time 1523972940000 ms 18/04/17 16:49:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.19 from job set of time 1523972940000 ms 18/04/17 16:49:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 428.0 (TID 428) in 7809 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:49:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 428.0, whose tasks have all completed, from pool 18/04/17 16:49:07 INFO scheduler.DAGScheduler: ResultStage 428 (foreachPartition at PredictorEngineApp.java:153) finished in 7.810 s 18/04/17 16:49:07 INFO scheduler.DAGScheduler: Job 429 finished: foreachPartition at PredictorEngineApp.java:153, took 7.836821 s 18/04/17 16:49:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4c87276a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4c87276a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34386, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93b6, negotiated timeout = 60000 18/04/17 16:49:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93b6 18/04/17 16:49:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93b6 closed 18/04/17 16:49:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:07 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.2 from job set of time 1523972940000 ms 18/04/17 16:49:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 447.0 (TID 447) in 8615 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:49:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 447.0, whose tasks have all completed, from pool 18/04/17 16:49:08 INFO scheduler.DAGScheduler: ResultStage 447 (foreachPartition at PredictorEngineApp.java:153) finished in 8.616 s 18/04/17 16:49:08 INFO scheduler.DAGScheduler: Job 447 finished: foreachPartition at PredictorEngineApp.java:153, took 8.710096 s 18/04/17 16:49:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x53e3877 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x53e38770x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56241, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9381, negotiated timeout = 60000 18/04/17 16:49:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9381 18/04/17 16:49:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9381 closed 18/04/17 16:49:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:08 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.33 from job set of time 1523972940000 ms 18/04/17 16:49:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 436.0 (TID 436) in 9081 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:49:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 436.0, whose tasks have all completed, from pool 18/04/17 16:49:09 INFO scheduler.DAGScheduler: ResultStage 436 (foreachPartition at PredictorEngineApp.java:153) finished in 9.081 s 18/04/17 16:49:09 INFO scheduler.DAGScheduler: Job 436 finished: foreachPartition at PredictorEngineApp.java:153, took 9.134914 s 18/04/17 16:49:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f519377 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f5193770x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56245, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9384, negotiated timeout = 60000 18/04/17 16:49:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9384 18/04/17 16:49:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9384 closed 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.11 from job set of time 1523972940000 ms 18/04/17 16:49:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 435.0 (TID 435) in 9671 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:49:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 435.0, whose tasks have all completed, from pool 18/04/17 16:49:09 INFO scheduler.DAGScheduler: ResultStage 435 (foreachPartition at PredictorEngineApp.java:153) finished in 9.672 s 18/04/17 16:49:09 INFO scheduler.DAGScheduler: Job 435 finished: foreachPartition at PredictorEngineApp.java:153, took 9.721490 s 18/04/17 16:49:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x43146621 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x431466210x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56248, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9385, negotiated timeout = 60000 18/04/17 16:49:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9385 18/04/17 16:49:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9385 closed 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.5 from job set of time 1523972940000 ms 18/04/17 16:49:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 441.0 (TID 441) in 9743 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:49:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 441.0, whose tasks have all completed, from pool 18/04/17 16:49:09 INFO scheduler.DAGScheduler: ResultStage 441 (foreachPartition at PredictorEngineApp.java:153) finished in 9.755 s 18/04/17 16:49:09 INFO scheduler.DAGScheduler: Job 442 finished: foreachPartition at PredictorEngineApp.java:153, took 9.823855 s 18/04/17 16:49:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6fe52858 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6fe528580x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34400, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93b7, negotiated timeout = 60000 18/04/17 16:49:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93b7 18/04/17 16:49:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93b7 closed 18/04/17 16:49:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:09 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.1 from job set of time 1523972940000 ms 18/04/17 16:49:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 443.0 (TID 443) in 12335 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:49:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 443.0, whose tasks have all completed, from pool 18/04/17 16:49:12 INFO scheduler.DAGScheduler: ResultStage 443 (foreachPartition at PredictorEngineApp.java:153) finished in 12.336 s 18/04/17 16:49:12 INFO scheduler.DAGScheduler: Job 443 finished: foreachPartition at PredictorEngineApp.java:153, took 12.419275 s 18/04/17 16:49:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x297a5f75 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x297a5f750x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56257, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9388, negotiated timeout = 60000 18/04/17 16:49:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9388 18/04/17 16:49:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9388 closed 18/04/17 16:49:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:12 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.22 from job set of time 1523972940000 ms 18/04/17 16:49:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 444.0 (TID 444) in 12978 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:49:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 444.0, whose tasks have all completed, from pool 18/04/17 16:49:13 INFO scheduler.DAGScheduler: ResultStage 444 (foreachPartition at PredictorEngineApp.java:153) finished in 12.979 s 18/04/17 16:49:13 INFO scheduler.DAGScheduler: Job 444 finished: foreachPartition at PredictorEngineApp.java:153, took 13.066187 s 18/04/17 16:49:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xcc7b8ab connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xcc7b8ab0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56262, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a938a, negotiated timeout = 60000 18/04/17 16:49:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a938a 18/04/17 16:49:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a938a closed 18/04/17 16:49:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:13 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.26 from job set of time 1523972940000 ms 18/04/17 16:49:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 433.0 (TID 433) in 14172 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:49:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 433.0, whose tasks have all completed, from pool 18/04/17 16:49:14 INFO scheduler.DAGScheduler: ResultStage 433 (foreachPartition at PredictorEngineApp.java:153) finished in 14.173 s 18/04/17 16:49:14 INFO scheduler.DAGScheduler: Job 433 finished: foreachPartition at PredictorEngineApp.java:153, took 14.216052 s 18/04/17 16:49:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x23f1c6b3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:49:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x23f1c6b30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:49:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:49:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39010, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:49:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cc2, negotiated timeout = 60000 18/04/17 16:49:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cc2 18/04/17 16:49:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cc2 closed 18/04/17 16:49:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:49:14 INFO scheduler.JobScheduler: Finished job streaming job 1523972940000 ms.10 from job set of time 1523972940000 ms 18/04/17 16:49:14 INFO scheduler.JobScheduler: Total delay: 14.346 s for time 1523972940000 ms (execution: 14.279 s) 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 540 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 540 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 540 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 540 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 541 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 541 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 541 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 541 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 542 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 542 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 542 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 542 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 543 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 543 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 543 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 543 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 544 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 544 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 544 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 544 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 545 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 545 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 545 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 545 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 546 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 546 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 546 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 546 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 547 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 547 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 547 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 547 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 548 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 548 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 548 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 548 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 549 from persistence list 18/04/17 
16:49:14 INFO storage.BlockManager: Removing RDD 549 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 549 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 549 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 550 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 550 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 550 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 550 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 551 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 551 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 551 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 551 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 552 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 552 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 552 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 552 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 553 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 553 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 553 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 553 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 554 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 554 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 554 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 554 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 555 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 555 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 555 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 555 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 556 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 556 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 556 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 556 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 557 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 557 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 557 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 557 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 558 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 558 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 558 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 558 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 559 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 559 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 559 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 559 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 560 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 560 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 560 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 560 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 561 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 561 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 561 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 561 
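Each cycle above follows the same order: the DAGScheduler reports "Job N finished: foreachPartition at PredictorEngineApp.java:153", a fresh hconnection-* client opens a ZooKeeper session against the /hbase ensemble, that session is closed almost immediately, and only then does the JobScheduler log "Finished job streaming job 1523972940000 ms.N". One ZooKeeper session per output job in this driver/AM log is the signature of an HBase Connection being created and closed on the driver inside every output operation. The application source is not part of this log, so what follows is only a minimal Java sketch of a shape that would reproduce this ordering, assuming Spark 1.6 with the Kafka 0.8 direct API (the KafkaRDDs are created at PredictorEngineApp.java:125) and the HBase 1.x client; the broker list, topic, table names, and column layout are invented for illustration, and the real application evidently runs about 36 such output operations per 60-second batch (jobs ms.0 through ms.35 in the 16:50:00 batch below), not the single stream shown here.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public class PredictorEngineSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        // 60-second batches, matching the 1523972940000 -> 1523973000000 ms spacing above.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "kafka-broker:9092");           // hypothetical
        Set<String> topics = new HashSet<>(Arrays.asList("prediction-events")); // hypothetical

        // Receiver-less direct stream; the KafkaRDD[...] instances in the log come from a
        // call of this shape (createDirectStream at PredictorEngineApp.java:125).
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        stream.foreachRDD(rdd -> {
            // Executor-side bulk write (foreachPartition at PredictorEngineApp.java:153).
            // Connections opened here would log their ZooKeeper sessions in the executor
            // containers, not in this driver/AM stderr.
            rdd.foreachPartition(records -> {
                try (Connection executorSide = ConnectionFactory.createConnection(HBaseConfiguration.create());
                     Table table = executorSide.getTable(TableName.valueOf("predictions"))) { // hypothetical
                    while (records.hasNext()) {
                        Tuple2<String, String> record = records.next();
                        table.put(new Put(Bytes.toBytes(record._1()))
                                .addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"),
                                           Bytes.toBytes(record._2())));
                    }
                }
            });

            // Driver-side bookkeeping after the Spark job returns. Creating and closing a
            // Connection here produces one ZooKeeper connect/close pair between
            // "Job N finished" and "Finished job streaming job ..." for every output job,
            // which is the ordering visible above.
            Configuration hbaseConf = HBaseConfiguration.create();
            try (Connection driverSide = ConnectionFactory.createConnection(hbaseConf);
                 Table meta = driverSide.getTable(TableName.valueOf("predictor_meta"))) {   // hypothetical
                meta.put(new Put(Bytes.toBytes("lastBatch"))
                        .addColumn(Bytes.toBytes("d"), Bytes.toBytes("ts"),
                                   Bytes.toBytes(String.valueOf(System.currentTimeMillis()))));
            }
        });

        jssc.start();
        jssc.awaitTermination();
    }
}

On this reading, the repeated connect/close pairs are expected behaviour of a per-job connection pattern rather than a fault: every session negotiates its 60000 ms timeout, closes cleanly ("Session ... closed", "EventThread shut down"), and every ResultStage completes. Reusing a single driver-side Connection across batches would remove the churn but would not change the results recorded in this log.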
18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 562 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 562 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 562 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 562 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 563 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 563 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 563 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 563 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 564 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 564 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 564 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 564 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 565 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 565 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 565 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 565 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 566 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 566 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 566 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 566 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 567 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 567 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 567 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 567 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 568 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 568 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 568 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 568 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 569 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 569 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 569 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 569 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 570 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 570 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 570 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 570 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 571 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 571 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 571 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 571 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 572 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 572 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 572 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 572 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 573 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 573 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 573 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 573 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 574 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 574 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 
574 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 574 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 575 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 575 18/04/17 16:49:14 INFO kafka.KafkaRDD: Removing RDD 575 from persistence list 18/04/17 16:49:14 INFO storage.BlockManager: Removing RDD 575 18/04/17 16:49:14 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:49:14 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972820000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Added jobs for time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.1 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.0 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.2 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.3 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.0 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.4 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.4 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.5 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.3 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.7 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.8 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.6 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.9 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.10 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.11 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.12 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.13 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.14 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.15 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.14 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.13 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.16 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.18 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.17 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.16 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.19 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.17 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.21 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.20 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.21 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.23 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.22 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.24 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.26 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.27 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.25 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.28 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.29 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.31 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.30 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.32 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.30 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.33 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.35 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.35 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973000000 ms.34 from job set of time 1523973000000 ms 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 448 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 448 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO 
scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 448 (KafkaRDD[623] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_448 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_448_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_448_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 448 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 448 (KafkaRDD[623] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 448.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 449 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 448.0 (TID 448, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 449 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 449 (KafkaRDD[640] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_449 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_448_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_449_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_449_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 449 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 449 (KafkaRDD[640] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 449.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 450 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 450 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 449.0 (TID 449, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 450 (KafkaRDD[635] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_450 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_450_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_450_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 450 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 450 (KafkaRDD[635] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 450.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 451 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 451 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_449_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 451 (KafkaRDD[638] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 450.0 (TID 450, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_451 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_451_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_451_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 451 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 451 (KafkaRDD[638] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 451.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 452 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 452 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 452 (KafkaRDD[645] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 451.0 (TID 451, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_452 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_452_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_452_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 452 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 452 (KafkaRDD[645] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_450_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 452.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 453 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 453 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 
ResultStage 453 (KafkaRDD[617] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 452.0 (TID 452, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_453 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_453_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_453_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 453 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 453 (KafkaRDD[617] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 453.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 454 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 454 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 454 (KafkaRDD[643] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 453.0 (TID 453, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_454 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_454_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_454_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 454 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 454 (KafkaRDD[643] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 454.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 455 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 455 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 455 (KafkaRDD[641] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_455 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 454.0 (TID 454, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 431 18/04/17 16:50:00 INFO storage.BlockManagerInfo: 
Removed broadcast_423_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_455_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_455_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 455 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 455 (KafkaRDD[641] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 455.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 456 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 456 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 456 (KafkaRDD[646] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_456 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_452_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 455.0 (TID 455, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_456_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_456_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 456 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 456 (KafkaRDD[646] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 456.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 457 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 457 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 457 (KafkaRDD[644] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_423_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_457 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 456.0 (TID 456, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_453_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 
3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_457_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_457_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 457 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 457 (KafkaRDD[644] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 457.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 459 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 458 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 458 (KafkaRDD[636] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_458 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 457.0 (TID 457, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 424 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_422_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_458_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_458_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 458 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 458 (KafkaRDD[636] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 458.0 with 1 tasks 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_422_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 458 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 459 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_455_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 459 (KafkaRDD[620] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_459 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 458.0 (TID 458, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 
16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 423 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_425_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_425_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_459_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_459_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 459 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 426 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 459 (KafkaRDD[620] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 459.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 461 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 460 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 460 (KafkaRDD[613] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_451_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_460 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_424_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 459.0 (TID 459, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_424_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_456_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 425 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_427_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_427_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 428 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_460_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_426_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_460_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 460 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks 
from ResultStage 460 (KafkaRDD[613] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 460.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 460 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 461 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 461 (KafkaRDD[627] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_461 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_426_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 460.0 (TID 460, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_459_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_457_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_461_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_461_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 461 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 461 (KafkaRDD[627] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 461.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 463 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 462 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 462 (KafkaRDD[618] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_462 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 461.0 (TID 461, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_462_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_462_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 462 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 462 (KafkaRDD[618] at createDirectStream at 
PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 462.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 462 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 463 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 463 (KafkaRDD[614] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_463 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 462.0 (TID 462, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_460_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_463_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_463_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 463 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 463 (KafkaRDD[614] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 463.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 464 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 464 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 464 (KafkaRDD[634] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 427 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_458_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_464 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_429_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 463.0 (TID 463, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_454_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_429_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 430 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_464_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:50:00 INFO 
storage.BlockManagerInfo: Removed broadcast_428_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_464_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 464 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 464 (KafkaRDD[634] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 464.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 466 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 465 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 465 (KafkaRDD[632] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_465 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_428_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 464.0 (TID 464, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 429 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_461_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_431_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_465_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_462_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_465_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 465 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 465 (KafkaRDD[632] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 465.0 with 1 tasks 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_431_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 467 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 466 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 466 (KafkaRDD[637] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO 
storage.MemoryStore: Block broadcast_466 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 465.0 (TID 465, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 432 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_430_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_463_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_466_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_430_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_466_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 466 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 466 (KafkaRDD[637] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 466.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 465 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 467 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 467 (KafkaRDD[619] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_467 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_433_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 466.0 (TID 466, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_433_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_464_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_467_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 434 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_467_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 467 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 467 (KafkaRDD[619] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 467.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 468 (foreachPartition at PredictorEngineApp.java:153) with 1 
output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 468 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 468 (KafkaRDD[622] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_432_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_468 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 467.0 (TID 467, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_432_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_465_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 433 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_468_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_435_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_468_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 468 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 468 (KafkaRDD[622] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 468.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 469 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 469 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 469 (KafkaRDD[630] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_435_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_466_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_469 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 436 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 468.0 (TID 468, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_434_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_434_piece0 on ***hostname masked***:50260 in memory 
(size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_469_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_469_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 469 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 469 (KafkaRDD[630] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 469.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 470 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 470 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 470 (KafkaRDD[631] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_470 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 469.0 (TID 469, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 435 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_470_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_467_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_470_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_437_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 470 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 470 (KafkaRDD[631] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 470.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 471 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 471 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_468_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 471 (KafkaRDD[639] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_471 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_437_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 
16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 470.0 (TID 470, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 438 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_436_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_471_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_471_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 471 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 471 (KafkaRDD[639] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 471.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 472 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 472 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 472 (KafkaRDD[621] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_436_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_472 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 471.0 (TID 471, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 437 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_472_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_472_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_439_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 472 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 472 (KafkaRDD[621] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 472.0 with 1 tasks 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Got job 473 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 473 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting ResultStage 473 (KafkaRDD[624] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_469_piece0 in memory on 
***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_473 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 472.0 (TID 472, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_439_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.MemoryStore: Block broadcast_473_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_473_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO spark.SparkContext: Created broadcast 473 from broadcast at DAGScheduler.scala:1006 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 473 (KafkaRDD[624] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Adding task set 473.0 with 1 tasks 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 440 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_438_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 473.0 (TID 473, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_471_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_438_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 439 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_441_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_472_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_441_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 442 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_440_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_440_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_473_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 441 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Added broadcast_470_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_443_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_443_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 444 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_442_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 
16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_442_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 443 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_445_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_445_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 446 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_444_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_444_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 445 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_447_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_447_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 448 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_446_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:00 INFO storage.BlockManagerInfo: Removed broadcast_446_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:00 INFO spark.ContextCleaner: Cleaned accumulator 447 18/04/17 16:50:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 463.0 (TID 463) in 171 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:50:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 463.0, whose tasks have all completed, from pool 18/04/17 16:50:00 INFO scheduler.DAGScheduler: ResultStage 463 (foreachPartition at PredictorEngineApp.java:153) finished in 0.172 s 18/04/17 16:50:00 INFO scheduler.DAGScheduler: Job 462 finished: foreachPartition at PredictorEngineApp.java:153, took 0.283419 s 18/04/17 16:50:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7a422277 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7a4222770x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56419, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9399, negotiated timeout = 60000 18/04/17 16:50:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9399 18/04/17 16:50:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9399 closed 18/04/17 16:50:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.2 from job set of time 1523973000000 ms 18/04/17 16:50:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 466.0 (TID 466) in 1221 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:50:01 INFO scheduler.DAGScheduler: ResultStage 466 (foreachPartition at PredictorEngineApp.java:153) finished in 1.222 s 18/04/17 16:50:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 466.0, whose tasks have all completed, from pool 18/04/17 16:50:01 INFO scheduler.DAGScheduler: Job 467 finished: foreachPartition at PredictorEngineApp.java:153, took 1.342888 s 18/04/17 16:50:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3ba6cc2f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3ba6cc2f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56426, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93a0, negotiated timeout = 60000 18/04/17 16:50:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93a0 18/04/17 16:50:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93a0 closed 18/04/17 16:50:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.25 from job set of time 1523973000000 ms 18/04/17 16:50:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 467.0 (TID 467) in 2324 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:50:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 467.0, whose tasks have all completed, from pool 18/04/17 16:50:02 INFO scheduler.DAGScheduler: ResultStage 467 (foreachPartition at PredictorEngineApp.java:153) finished in 2.325 s 18/04/17 16:50:02 INFO scheduler.DAGScheduler: Job 465 finished: foreachPartition at PredictorEngineApp.java:153, took 2.449745 s 18/04/17 16:50:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72457684 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x724576840x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34580, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93ca, negotiated timeout = 60000 18/04/17 16:50:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93ca 18/04/17 16:50:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93ca closed 18/04/17 16:50:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.7 from job set of time 1523973000000 ms 18/04/17 16:50:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 459.0 (TID 459) in 2609 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:50:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 459.0, whose tasks have all completed, from pool 18/04/17 16:50:02 INFO scheduler.DAGScheduler: ResultStage 459 (foreachPartition at PredictorEngineApp.java:153) finished in 2.611 s 18/04/17 16:50:02 INFO scheduler.DAGScheduler: Job 458 finished: foreachPartition at PredictorEngineApp.java:153, took 2.705288 s 18/04/17 16:50:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4cf18db6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4cf18db60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39178, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cdc, negotiated timeout = 60000 18/04/17 16:50:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cdc 18/04/17 16:50:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cdc closed 18/04/17 16:50:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.8 from job set of time 1523973000000 ms 18/04/17 16:50:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 461.0 (TID 461) in 3838 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:50:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 461.0, whose tasks have all completed, from pool 18/04/17 16:50:04 INFO scheduler.DAGScheduler: ResultStage 461 (foreachPartition at PredictorEngineApp.java:153) finished in 3.839 s 18/04/17 16:50:04 INFO scheduler.DAGScheduler: Job 460 finished: foreachPartition at PredictorEngineApp.java:153, took 3.944029 s 18/04/17 16:50:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f406d3e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f406d3e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39184, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cde, negotiated timeout = 60000 18/04/17 16:50:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cde 18/04/17 16:50:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cde closed 18/04/17 16:50:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.15 from job set of time 1523973000000 ms 18/04/17 16:50:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 450.0 (TID 450) in 4715 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:50:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 450.0, whose tasks have all completed, from pool 18/04/17 16:50:04 INFO scheduler.DAGScheduler: ResultStage 450 (foreachPartition at PredictorEngineApp.java:153) finished in 4.716 s 18/04/17 16:50:04 INFO scheduler.DAGScheduler: Job 450 finished: foreachPartition at PredictorEngineApp.java:153, took 4.763299 s 18/04/17 16:50:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x237dd3b9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x237dd3b90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34593, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93cd, negotiated timeout = 60000 18/04/17 16:50:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93cd 18/04/17 16:50:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93cd closed 18/04/17 16:50:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.23 from job set of time 1523973000000 ms 18/04/17 16:50:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 454.0 (TID 454) in 5857 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:50:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 454.0, whose tasks have all completed, from pool 18/04/17 16:50:05 INFO scheduler.DAGScheduler: ResultStage 454 (foreachPartition at PredictorEngineApp.java:153) finished in 5.858 s 18/04/17 16:50:05 INFO scheduler.DAGScheduler: Job 454 finished: foreachPartition at PredictorEngineApp.java:153, took 5.923048 s 18/04/17 16:50:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xaa4d42a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xaa4d42a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39192, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cdf, negotiated timeout = 60000 18/04/17 16:50:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cdf 18/04/17 16:50:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cdf closed 18/04/17 16:50:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.31 from job set of time 1523973000000 ms 18/04/17 16:50:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 472.0 (TID 472) in 6909 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:50:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 472.0, whose tasks have all completed, from pool 18/04/17 16:50:07 INFO scheduler.DAGScheduler: ResultStage 472 (foreachPartition at PredictorEngineApp.java:153) finished in 6.910 s 18/04/17 16:50:07 INFO scheduler.DAGScheduler: Job 472 finished: foreachPartition at PredictorEngineApp.java:153, took 7.039256 s 18/04/17 16:50:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x23a7c5a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x23a7c5a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34602, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93d0, negotiated timeout = 60000 18/04/17 16:50:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93d0 18/04/17 16:50:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93d0 closed 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.9 from job set of time 1523973000000 ms 18/04/17 16:50:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 456.0 (TID 456) in 7261 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:50:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 456.0, whose tasks have all completed, from pool 18/04/17 16:50:07 INFO scheduler.DAGScheduler: ResultStage 456 (foreachPartition at PredictorEngineApp.java:153) finished in 7.261 s 18/04/17 16:50:07 INFO scheduler.DAGScheduler: Job 456 finished: foreachPartition at PredictorEngineApp.java:153, took 7.345941 s 18/04/17 16:50:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3b122987 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3b1229870x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34605, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 473.0 (TID 473) in 7207 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:50:07 INFO scheduler.DAGScheduler: ResultStage 473 (foreachPartition at PredictorEngineApp.java:153) finished in 7.209 s 18/04/17 16:50:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 473.0, whose tasks have all completed, from pool 18/04/17 16:50:07 INFO scheduler.DAGScheduler: Job 473 finished: foreachPartition at PredictorEngineApp.java:153, took 7.339439 s 18/04/17 16:50:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x23ff91ce connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x23ff91ce0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39201, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93d1, negotiated timeout = 60000 18/04/17 16:50:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93d1 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ce1, negotiated timeout = 60000 18/04/17 16:50:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93d1 closed 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ce1 18/04/17 16:50:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ce1 closed 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.34 from job set of time 1523973000000 ms 18/04/17 16:50:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.12 from job set of time 1523973000000 ms 18/04/17 16:50:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 455.0 (TID 455) in 7609 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:50:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 455.0, whose tasks have all completed, from pool 18/04/17 16:50:07 INFO scheduler.DAGScheduler: ResultStage 455 (foreachPartition at PredictorEngineApp.java:153) finished in 7.609 s 18/04/17 16:50:07 INFO scheduler.DAGScheduler: Job 455 finished: foreachPartition at PredictorEngineApp.java:153, took 7.691683 s 18/04/17 16:50:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x22b2b4b8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x22b2b4b80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39207, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ce3, negotiated timeout = 60000 18/04/17 16:50:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ce3 18/04/17 16:50:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 471.0 (TID 471) in 7567 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:50:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 471.0, whose tasks have all completed, from pool 18/04/17 16:50:07 INFO scheduler.DAGScheduler: ResultStage 471 (foreachPartition at PredictorEngineApp.java:153) finished in 7.568 s 18/04/17 16:50:07 INFO scheduler.DAGScheduler: Job 471 finished: foreachPartition at PredictorEngineApp.java:153, took 7.693588 s 18/04/17 16:50:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x14f60b98 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x14f60b980x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56466, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ce3 closed 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93a3, negotiated timeout = 60000 18/04/17 16:50:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.29 from job set of time 1523973000000 ms 18/04/17 16:50:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93a3 18/04/17 16:50:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93a3 closed 18/04/17 16:50:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.27 from job set of time 1523973000000 ms 18/04/17 16:50:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 452.0 (TID 452) in 7881 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:50:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 452.0, whose tasks have all completed, from pool 18/04/17 16:50:08 INFO scheduler.DAGScheduler: ResultStage 452 (foreachPartition at PredictorEngineApp.java:153) finished in 7.881 s 18/04/17 16:50:08 INFO scheduler.DAGScheduler: Job 452 finished: foreachPartition at PredictorEngineApp.java:153, took 7.937542 s 18/04/17 16:50:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2552c26a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 16:50:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2552c26a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34618, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93d4, negotiated timeout = 60000 18/04/17 16:50:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93d4 18/04/17 16:50:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93d4 closed 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.33 from job set of time 1523973000000 ms 18/04/17 16:50:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 458.0 (TID 458) in 8334 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:50:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 458.0, whose tasks have all completed, from pool 18/04/17 16:50:08 INFO scheduler.DAGScheduler: ResultStage 458 (foreachPartition at PredictorEngineApp.java:153) finished in 8.334 s 18/04/17 16:50:08 INFO scheduler.DAGScheduler: Job 459 finished: foreachPartition at PredictorEngineApp.java:153, took 8.425538 s 18/04/17 16:50:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x864395 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x8643950x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34623, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93d5, negotiated timeout = 60000 18/04/17 16:50:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93d5 18/04/17 16:50:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93d5 closed 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.24 from job set of time 1523973000000 ms 18/04/17 16:50:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 449.0 (TID 449) in 8757 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:50:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 449.0, whose tasks have all completed, from pool 18/04/17 16:50:08 INFO scheduler.DAGScheduler: ResultStage 449 (foreachPartition at PredictorEngineApp.java:153) finished in 8.758 s 18/04/17 16:50:08 INFO scheduler.DAGScheduler: Job 449 finished: foreachPartition at PredictorEngineApp.java:153, took 8.793550 s 18/04/17 16:50:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4c0dcb4e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4c0dcb4e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34626, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93d6, negotiated timeout = 60000 18/04/17 16:50:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93d6 18/04/17 16:50:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93d6 closed 18/04/17 16:50:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.28 from job set of time 1523973000000 ms 18/04/17 16:50:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 464.0 (TID 464) in 9190 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:50:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 464.0, whose tasks have all completed, from pool 18/04/17 16:50:09 INFO scheduler.DAGScheduler: ResultStage 464 (foreachPartition at PredictorEngineApp.java:153) finished in 9.191 s 18/04/17 16:50:09 INFO scheduler.DAGScheduler: Job 464 finished: foreachPartition at PredictorEngineApp.java:153, took 9.305296 s 18/04/17 16:50:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x34330b05 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x34330b050x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34631, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93d7, negotiated timeout = 60000 18/04/17 16:50:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93d7 18/04/17 16:50:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93d7 closed 18/04/17 16:50:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.22 from job set of time 1523973000000 ms 18/04/17 16:50:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 470.0 (TID 470) in 9542 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:50:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 470.0, whose tasks have all completed, from pool 18/04/17 16:50:09 INFO scheduler.DAGScheduler: ResultStage 470 (foreachPartition at PredictorEngineApp.java:153) finished in 9.543 s 18/04/17 16:50:09 INFO scheduler.DAGScheduler: Job 470 finished: foreachPartition at PredictorEngineApp.java:153, took 9.666014 s 18/04/17 16:50:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x239129ed connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x239129ed0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39233, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ce6, negotiated timeout = 60000 18/04/17 16:50:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ce6 18/04/17 16:50:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ce6 closed 18/04/17 16:50:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.19 from job set of time 1523973000000 ms 18/04/17 16:50:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 457.0 (TID 457) in 9969 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:50:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 457.0, whose tasks have all completed, from pool 18/04/17 16:50:10 INFO scheduler.DAGScheduler: ResultStage 457 (foreachPartition at PredictorEngineApp.java:153) finished in 9.969 s 18/04/17 16:50:10 INFO scheduler.DAGScheduler: Job 457 finished: foreachPartition at PredictorEngineApp.java:153, took 10.057184 s 18/04/17 16:50:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xee3a58b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xee3a58b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39239, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ce7, negotiated timeout = 60000 18/04/17 16:50:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ce7 18/04/17 16:50:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ce7 closed 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.32 from job set of time 1523973000000 ms 18/04/17 16:50:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 469.0 (TID 469) in 9966 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:50:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 469.0, whose tasks have all completed, from pool 18/04/17 16:50:10 INFO scheduler.DAGScheduler: ResultStage 469 (foreachPartition at PredictorEngineApp.java:153) finished in 9.967 s 18/04/17 16:50:10 INFO scheduler.DAGScheduler: Job 469 finished: foreachPartition at PredictorEngineApp.java:153, took 10.086968 s 18/04/17 16:50:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c654725 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c6547250x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34647, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93d9, negotiated timeout = 60000 18/04/17 16:50:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93d9 18/04/17 16:50:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93d9 closed 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.18 from job set of time 1523973000000 ms 18/04/17 16:50:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 462.0 (TID 462) in 10107 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:50:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 462.0, whose tasks have all completed, from pool 18/04/17 16:50:10 INFO scheduler.DAGScheduler: ResultStage 462 (foreachPartition at PredictorEngineApp.java:153) finished in 10.108 s 18/04/17 16:50:10 INFO scheduler.DAGScheduler: Job 463 finished: foreachPartition at PredictorEngineApp.java:153, took 10.216735 s 18/04/17 16:50:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7faaf982 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7faaf9820x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39245, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ce9, negotiated timeout = 60000 18/04/17 16:50:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ce9 18/04/17 16:50:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ce9 closed 18/04/17 16:50:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.6 from job set of time 1523973000000 ms 18/04/17 16:50:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 465.0 (TID 465) in 11553 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:50:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 465.0, whose tasks have all completed, from pool 18/04/17 16:50:11 INFO scheduler.DAGScheduler: ResultStage 465 (foreachPartition at PredictorEngineApp.java:153) finished in 11.554 s 18/04/17 16:50:11 INFO scheduler.DAGScheduler: Job 466 finished: foreachPartition at PredictorEngineApp.java:153, took 11.672384 s 18/04/17 16:50:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x14a893b2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x14a893b20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39249, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cea, negotiated timeout = 60000 18/04/17 16:50:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cea 18/04/17 16:50:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cea closed 18/04/17 16:50:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.20 from job set of time 1523973000000 ms 18/04/17 16:50:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 448.0 (TID 448) in 12500 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:50:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 448.0, whose tasks have all completed, from pool 18/04/17 16:50:12 INFO scheduler.DAGScheduler: ResultStage 448 (foreachPartition at PredictorEngineApp.java:153) finished in 12.500 s 18/04/17 16:50:12 INFO scheduler.DAGScheduler: Job 448 finished: foreachPartition at PredictorEngineApp.java:153, took 12.520209 s 18/04/17 16:50:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x239a490b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x239a490b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34658, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93da, negotiated timeout = 60000 18/04/17 16:50:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93da 18/04/17 16:50:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93da closed 18/04/17 16:50:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.11 from job set of time 1523973000000 ms 18/04/17 16:50:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 468.0 (TID 468) in 14047 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:50:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 468.0, whose tasks have all completed, from pool 18/04/17 16:50:14 INFO scheduler.DAGScheduler: ResultStage 468 (foreachPartition at PredictorEngineApp.java:153) finished in 14.048 s 18/04/17 16:50:14 INFO scheduler.DAGScheduler: Job 468 finished: foreachPartition at PredictorEngineApp.java:153, took 14.176187 s 18/04/17 16:50:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1317fa4b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1317fa4b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34664, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93dc, negotiated timeout = 60000 18/04/17 16:50:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93dc 18/04/17 16:50:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93dc closed 18/04/17 16:50:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:14 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.10 from job set of time 1523973000000 ms 18/04/17 16:50:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 460.0 (TID 460) in 14752 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:50:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 460.0, whose tasks have all completed, from pool 18/04/17 16:50:14 INFO scheduler.DAGScheduler: ResultStage 460 (foreachPartition at PredictorEngineApp.java:153) finished in 14.753 s 18/04/17 16:50:14 INFO scheduler.DAGScheduler: Job 461 finished: foreachPartition at PredictorEngineApp.java:153, took 14.854775 s 18/04/17 16:50:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5c167075 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5c1670750x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56518, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93a9, negotiated timeout = 60000 18/04/17 16:50:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93a9 18/04/17 16:50:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93a9 closed 18/04/17 16:50:14 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.1 from job set of time 1523973000000 ms 18/04/17 16:50:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 451.0 (TID 451) in 15366 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:50:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 451.0, whose tasks have all completed, from pool 18/04/17 16:50:15 INFO scheduler.DAGScheduler: ResultStage 451 (foreachPartition at PredictorEngineApp.java:153) finished in 15.367 s 18/04/17 16:50:15 INFO scheduler.DAGScheduler: Job 451 finished: foreachPartition at PredictorEngineApp.java:153, took 15.419364 s 18/04/17 16:50:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3b9fd383 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3b9fd3830x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34671, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93dd, negotiated timeout = 60000 18/04/17 16:50:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93dd 18/04/17 16:50:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93dd closed 18/04/17 16:50:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.26 from job set of time 1523973000000 ms 18/04/17 16:50:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 453.0 (TID 453) in 15892 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:50:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 453.0, whose tasks have all completed, from pool 18/04/17 16:50:16 INFO scheduler.DAGScheduler: ResultStage 453 (foreachPartition at PredictorEngineApp.java:153) finished in 15.893 s 18/04/17 16:50:16 INFO scheduler.DAGScheduler: Job 453 finished: foreachPartition at PredictorEngineApp.java:153, took 15.953279 s 18/04/17 16:50:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x11193975 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:50:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x111939750x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:50:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
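The batch completed in the entries that follow (Total delay 16.057 s, execution 15.994 s) consists of a job set with roughly 35 numbered output operations (ms.0 through ms.34), and the next batch time, 1523973060000 ms, is exactly 60000 ms after 1523973000000 ms, so the application appears to run Spark Streaming with one-minute batches. The kafka.KafkaRDD "Removing RDD ... from persistence list" entries after each batch are the automatic cleanup of Kafka RDDs from earlier batches. Below is a minimal, hypothetical sketch of such a setup for Spark 1.6; broker addresses, topic names, the app name, and the number of streams are placeholders, and the use of the Kafka direct-stream API is an assumption based on the KafkaRDD entries.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public final class StreamingSetupSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("PredictorEngineSketch"); // placeholder app name
        // One-minute batches: consecutive batch times in the log are 60000 ms apart.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.minutes(1));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers

        // Each output operation registered here becomes one of the numbered
        // "streaming job <batch time> ms.N" entries that JobScheduler logs per batch.
        for (String topic : Arrays.asList("topic-0", "topic-1" /* , ... */)) { // placeholder topics
            Set<String> topics = new HashSet<>(Arrays.asList(topic));
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class,
                    StringDecoder.class, StringDecoder.class, kafkaParams, topics);

            stream.foreachRDD(rdd -> {
                // A per-partition HBase write like the one sketched earlier would go here,
                // e.g. rdd.values().foreachPartition(...), which is what the log reports
                // as "foreachPartition at PredictorEngineApp.java:153".
            });
        }

        jssc.start();
        jssc.awaitTermination();
    }
}
```

Whether the real application builds one stream per topic or registers several output operations on a shared stream cannot be determined from the log; only the count of jobs per batch and the batch interval are visible.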
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:50:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56527, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:50:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93aa, negotiated timeout = 60000 18/04/17 16:50:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93aa 18/04/17 16:50:16 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93aa closed 18/04/17 16:50:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:50:16 INFO scheduler.JobScheduler: Finished job streaming job 1523973000000 ms.5 from job set of time 1523973000000 ms 18/04/17 16:50:16 INFO scheduler.JobScheduler: Total delay: 16.057 s for time 1523973000000 ms (execution: 15.994 s) 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 576 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 576 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 576 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 576 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 577 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 577 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 577 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 577 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 578 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 578 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 578 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 578 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 579 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 579 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 579 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 579 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 580 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 580 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 580 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 580 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 581 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 581 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 581 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 581 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 582 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 582 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 582 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 582 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 583 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 583 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 583 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 583 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 584 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 584 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 584 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 584 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 585 from persistence list 18/04/17 
16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_461_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 585 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 585 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_461_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 585 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 586 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 586 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 586 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 586 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 587 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_448_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 587 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 587 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 587 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 588 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_448_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 588 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 588 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 588 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 449 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 451 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 589 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 589 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 589 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_449_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 589 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 590 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 590 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 590 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_449_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 590 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 591 from persistence list 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 450 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 591 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 591 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 591 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 592 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_451_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 592 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 592 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_451_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 592 18/04/17 16:50:16 INFO 
spark.ContextCleaner: Cleaned accumulator 452 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 593 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 593 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 593 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_450_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 593 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 594 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 594 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 594 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 594 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 595 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_450_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 595 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 595 from persistence list 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 454 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 595 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 596 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 596 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 596 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_452_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 596 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 597 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 597 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 597 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_452_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 597 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 598 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 598 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 598 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 598 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 599 from persistence list 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 453 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 599 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 599 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 599 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 600 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_454_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 600 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 600 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_454_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 600 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 601 from persistence list 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 455 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 601 18/04/17 16:50:16 INFO kafka.KafkaRDD: 
Removing RDD 601 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 601 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 602 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_453_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 602 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 602 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 602 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 603 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_453_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 603 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 603 from persistence list 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 457 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 603 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 604 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 604 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 604 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_455_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 604 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 605 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_455_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 605 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 605 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 605 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 456 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 606 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 606 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 606 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 606 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 607 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_457_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 607 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 607 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_457_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 607 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 458 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 608 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 608 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 608 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_456_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 608 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 609 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_456_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO 
storage.BlockManager: Removing RDD 609 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 609 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 609 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 460 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 610 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 610 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 610 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_458_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 610 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 611 from persistence list 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_458_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 611 18/04/17 16:50:16 INFO kafka.KafkaRDD: Removing RDD 611 from persistence list 18/04/17 16:50:16 INFO storage.BlockManager: Removing RDD 611 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 459 18/04/17 16:50:16 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:50:16 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523972880000 ms 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_460_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_460_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 461 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_459_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_459_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 463 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 462 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_463_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_463_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 464 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_462_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_462_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_473_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_473_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 474 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_472_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_472_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 466 18/04/17 16:50:16 INFO 
storage.BlockManagerInfo: Removed broadcast_464_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_464_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 465 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_466_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_466_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 467 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_465_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_465_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 469 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_467_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_467_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 468 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_469_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_469_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 470 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_468_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_468_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 472 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_470_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_470_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 471 18/04/17 16:50:16 INFO spark.ContextCleaner: Cleaned accumulator 473 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_471_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:50:16 INFO storage.BlockManagerInfo: Removed broadcast_471_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO scheduler.JobScheduler: Added jobs for time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.0 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.1 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.2 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.3 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 
1523973060000 ms.0 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.5 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.3 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.4 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.7 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.6 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.4 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.8 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.9 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.10 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.11 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.12 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.13 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.14 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.13 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.15 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.17 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.14 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.16 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.17 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.19 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.20 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.16 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.18 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.21 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.22 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.23 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 
1523973060000 ms.21 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.24 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.26 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.25 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.27 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.28 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.29 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.30 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.32 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.31 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.33 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.34 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.30 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973060000 ms.35 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.35 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 474 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 474 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 474 (KafkaRDD[663] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_474 stored as values in memory (estimated size 5.7 KB, free 
490.5 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_474_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_474_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 474 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 474 (KafkaRDD[663] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 474.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 475 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 475 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 475 (KafkaRDD[666] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 474.0 (TID 474, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_475 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_475_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_475_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 475 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 475 (KafkaRDD[666] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 475.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 476 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 476 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 476 (KafkaRDD[673] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 475.0 (TID 475, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_476 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_476_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_476_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 476 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_474_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 476 (KafkaRDD[673] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 476.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 477 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 477 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 477 (KafkaRDD[680] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 476.0 (TID 476, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_477 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_477_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_477_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 477 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 477 (KafkaRDD[680] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task 
set 477.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 479 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 478 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 478 (KafkaRDD[682] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 477.0 (TID 477, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_478 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_475_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_478_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_478_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 478 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 478 (KafkaRDD[682] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 478.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 478 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 479 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 479 (KafkaRDD[649] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 478.0 (TID 478, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_479 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_479_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_479_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 479 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_477_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 479 (KafkaRDD[649] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 479.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 480 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 480 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 480 (KafkaRDD[657] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 479.0 (TID 479, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_480 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_479_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_480_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_480_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 480 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 480 (KafkaRDD[657] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 480.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 481 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 481 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 481 (KafkaRDD[668] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_481 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 480.0 (TID 480, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_478_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_481_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_481_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 481 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 481 (KafkaRDD[668] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 481.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 482 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 482 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO 
scheduler.DAGScheduler: Submitting ResultStage 482 (KafkaRDD[674] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 481.0 (TID 481, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_482 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_476_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_482_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_482_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 482 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 482 (KafkaRDD[674] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 482.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 483 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 483 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 483 (KafkaRDD[672] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_480_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_483 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 482.0 (TID 482, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_481_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_483_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_483_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 483 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 483 (KafkaRDD[672] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 483.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 484 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 484 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 484 (KafkaRDD[675] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_484 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 483.0 (TID 483, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_484_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_484_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 484 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 484 (KafkaRDD[675] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 484.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 485 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 485 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 485 (KafkaRDD[656] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_485 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 484.0 (TID 484, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_485_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_485_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 485 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 485 (KafkaRDD[656] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 485.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 486 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 486 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 486 (KafkaRDD[658] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_486 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_482_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 485.0 (TID 485, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:51:00 INFO 
storage.BlockManagerInfo: Added broadcast_484_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_486_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_486_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 486 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 486 (KafkaRDD[658] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 486.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 487 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 487 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 487 (KafkaRDD[650] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_487 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 486.0 (TID 486, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_487_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_487_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 487 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 487 (KafkaRDD[650] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 487.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 490 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 488 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 488 (KafkaRDD[670] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_488 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 487.0 (TID 487, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_488_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_488_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 488 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 
INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 488 (KafkaRDD[670] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 488.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 488 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 489 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 489 (KafkaRDD[653] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_489 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 488.0 (TID 488, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_485_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_489_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_489_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 489 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 489 (KafkaRDD[653] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 489.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 489 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 490 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_483_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 490 (KafkaRDD[655] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_490 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 489.0 (TID 489, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_487_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_490_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_490_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 490 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 490 
(KafkaRDD[655] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 490.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 491 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 491 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 491 (KafkaRDD[660] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_491 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_488_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 490.0 (TID 490, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_491_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_486_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_491_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 491 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 491 (KafkaRDD[660] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 491.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 493 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 492 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 492 (KafkaRDD[659] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_492 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 491.0 (TID 491, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_489_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_490_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_492_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_492_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 492 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO 
scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 492 (KafkaRDD[659] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 492.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 492 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 493 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 493 (KafkaRDD[667] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_493 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 492.0 (TID 492, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_493_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_493_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 493 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 493 (KafkaRDD[667] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 493.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 494 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 494 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 494 (KafkaRDD[679] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_494 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 493.0 (TID 493, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_494_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_494_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_491_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 494 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 494 (KafkaRDD[679] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 494.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 495 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 
INFO scheduler.DAGScheduler: Final stage: ResultStage 495 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 495 (KafkaRDD[654] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_492_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_495 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 494.0 (TID 494, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_493_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_495_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_495_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 495 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 495 (KafkaRDD[654] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 495.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 497 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 496 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 496 (KafkaRDD[677] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_496 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 495.0 (TID 495, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_494_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_496_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_496_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 496 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 496 (KafkaRDD[677] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 496.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 496 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 497 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 497 (KafkaRDD[681] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_497 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 496.0 (TID 496, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_495_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_497_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_497_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 497 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 497 (KafkaRDD[681] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 497.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 498 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 498 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 498 (KafkaRDD[671] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_498 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 497.0 (TID 497, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_498_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_498_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 498 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 498 (KafkaRDD[671] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 498.0 with 1 tasks 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Got job 499 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 499 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting ResultStage 499 (KafkaRDD[676] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:51:00 INFO 
storage.MemoryStore: Block broadcast_499 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 498.0 (TID 498, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:51:00 INFO storage.MemoryStore: Block broadcast_499_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_499_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:51:00 INFO spark.SparkContext: Created broadcast 499 from broadcast at DAGScheduler.scala:1006 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 499 (KafkaRDD[676] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Adding task set 499.0 with 1 tasks 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_496_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 499.0 (TID 499, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_498_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_497_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO storage.BlockManagerInfo: Added broadcast_499_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 484.0 (TID 484) in 155 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 484.0, whose tasks have all completed, from pool 18/04/17 16:51:00 INFO scheduler.DAGScheduler: ResultStage 484 (foreachPartition at PredictorEngineApp.java:153) finished in 0.156 s 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Job 484 finished: foreachPartition at PredictorEngineApp.java:153, took 0.220066 s 18/04/17 16:51:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6b17b839 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6b17b8390x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 481.0 (TID 481) in 173 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:51:00 INFO scheduler.DAGScheduler: ResultStage 481 (foreachPartition at PredictorEngineApp.java:153) finished in 0.173 s 18/04/17 16:51:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 481.0, whose tasks have all completed, from pool 18/04/17 16:51:00 INFO scheduler.DAGScheduler: Job 481 finished: foreachPartition at PredictorEngineApp.java:153, took 0.221977 s 18/04/17 16:51:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x14a4e69a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:00 INFO zookeeper.ZooKeeper: Initiating client connection, 
connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x14a4e69a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39423, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34829, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cfa, negotiated timeout = 60000 18/04/17 16:51:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93ec, negotiated timeout = 60000 18/04/17 16:51:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cfa 18/04/17 16:51:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cfa closed 18/04/17 16:51:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.27 from job set of time 1523973060000 ms 18/04/17 16:51:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93ec 18/04/17 16:51:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93ec closed 18/04/17 16:51:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.20 from job set of time 1523973060000 ms 18/04/17 16:51:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 476.0 (TID 476) in 1791 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:51:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 476.0, whose tasks have all completed, from pool 18/04/17 16:51:01 INFO scheduler.DAGScheduler: ResultStage 476 (foreachPartition at PredictorEngineApp.java:153) finished in 1.793 s 18/04/17 16:51:01 INFO scheduler.DAGScheduler: Job 476 finished: foreachPartition at PredictorEngineApp.java:153, took 1.815518 s 18/04/17 16:51:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x29951df1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x29951df10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56686, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93c5, negotiated timeout = 60000 18/04/17 16:51:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93c5 18/04/17 16:51:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93c5 closed 18/04/17 16:51:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.25 from job set of time 1523973060000 ms 18/04/17 16:51:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 490.0 (TID 490) in 2039 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:51:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 490.0, whose tasks have all completed, from pool 18/04/17 16:51:02 INFO scheduler.DAGScheduler: ResultStage 490 (foreachPartition at PredictorEngineApp.java:153) finished in 2.040 s 18/04/17 16:51:02 INFO scheduler.DAGScheduler: Job 489 finished: foreachPartition at PredictorEngineApp.java:153, took 2.128694 s 18/04/17 16:51:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x642d6fda connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x642d6fda0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
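The scheduler entries above keep citing two call sites in application code that this log does not itself contain: createDirectStream at PredictorEngineApp.java:125, which is what produces the KafkaRDD[...] lineage shown for every ResultStage, and foreachPartition at PredictorEngineApp.java:153, which is the output operation each of these single-task jobs executes. A hedged reconstruction of that shape against the Spark 1.6 / Kafka 0.8 / HBase 1.x Java APIs is sketched below; it is not the actual PredictorEngineApp source, it registers a single stream where the application evidently registers several dozen (jobs ms.0 through ms.35 per batch), and the broker list, topic, table and column names are placeholders rather than values taken from the log.

    // Hedged sketch only — not the PredictorEngineApp source, which this log does not include.
    // Assumes the Spark 1.6 spark-streaming-kafka (Kafka 0.8) API and the HBase 1.x client API.
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    import kafka.serializer.StringDecoder;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    public final class PredictorEngineSketch {
      public static void main(String[] args) throws Exception {
        // 60 s batches: the job-set times in this log are exactly 60000 ms apart.
        JavaStreamingContext jssc = new JavaStreamingContext(
            new SparkConf().setAppName("predictor-engine"), Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "kafka-broker:9092");   // placeholder broker list
        Set<String> topics = Collections.singleton("events");           // placeholder topic

        // Roughly the call the scheduler cites as "createDirectStream at PredictorEngineApp.java:125";
        // it produces the KafkaRDD[...] instances that every ResultStage above is built on.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
            jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
            kafkaParams, topics);

        // Roughly the call cited as "foreachPartition at PredictorEngineApp.java:153": each batch
        // turns every registered output operation into one single-task job like those above.
        stream.foreachRDD(rdd -> rdd.foreachPartition(records -> {
          Configuration hbaseConf = HBaseConfiguration.create();
          try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
               Table table = connection.getTable(TableName.valueOf("predictions"))) {  // placeholder table
            while (records.hasNext()) {
              Tuple2<String, String> record = records.next();
              Put put = new Put(Bytes.toBytes(record._1()));
              put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("v"),       // placeholder family/qualifier
                  Bytes.toBytes(record._2()));
              table.put(put);
            }
          }
        }));

        jssc.start();
        jssc.awaitTermination();
      }
    }

Note that the hconnection-0x... / ZooKeeper session pairs interleaved in this log appear after each job finishes, so they come from driver-side code that runs once per completed job rather than from the executors' partition writes; what that driver-side connection is used for is not visible here.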
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39434, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cfe, negotiated timeout = 60000 18/04/17 16:51:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cfe 18/04/17 16:51:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cfe closed 18/04/17 16:51:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.7 from job set of time 1523973060000 ms 18/04/17 16:51:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 485.0 (TID 485) in 2373 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:51:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 485.0, whose tasks have all completed, from pool 18/04/17 16:51:02 INFO scheduler.DAGScheduler: ResultStage 485 (foreachPartition at PredictorEngineApp.java:153) finished in 2.373 s 18/04/17 16:51:02 INFO scheduler.DAGScheduler: Job 485 finished: foreachPartition at PredictorEngineApp.java:153, took 2.441090 s 18/04/17 16:51:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1dc4c726 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1dc4c7260x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34843, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93f2, negotiated timeout = 60000 18/04/17 16:51:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93f2 18/04/17 16:51:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93f2 closed 18/04/17 16:51:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.8 from job set of time 1523973060000 ms 18/04/17 16:51:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 491.0 (TID 491) in 3484 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:51:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 491.0, whose tasks have all completed, from pool 18/04/17 16:51:03 INFO scheduler.DAGScheduler: ResultStage 491 (foreachPartition at PredictorEngineApp.java:153) finished in 3.497 s 18/04/17 16:51:03 INFO scheduler.DAGScheduler: Job 491 finished: foreachPartition at PredictorEngineApp.java:153, took 3.588970 s 18/04/17 16:51:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6d9db1af connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6d9db1af0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39443, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28cff, negotiated timeout = 60000 18/04/17 16:51:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28cff 18/04/17 16:51:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28cff closed 18/04/17 16:51:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.12 from job set of time 1523973060000 ms 18/04/17 16:51:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 477.0 (TID 477) in 3991 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:51:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 477.0, whose tasks have all completed, from pool 18/04/17 16:51:04 INFO scheduler.DAGScheduler: ResultStage 477 (foreachPartition at PredictorEngineApp.java:153) finished in 3.991 s 18/04/17 16:51:04 INFO scheduler.DAGScheduler: Job 477 finished: foreachPartition at PredictorEngineApp.java:153, took 4.018010 s 18/04/17 16:51:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6f29c60a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6f29c60a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56703, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93c8, negotiated timeout = 60000 18/04/17 16:51:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93c8 18/04/17 16:51:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93c8 closed 18/04/17 16:51:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.32 from job set of time 1523973060000 ms 18/04/17 16:51:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 475.0 (TID 475) in 4985 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:51:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 475.0, whose tasks have all completed, from pool 18/04/17 16:51:05 INFO scheduler.DAGScheduler: ResultStage 475 (foreachPartition at PredictorEngineApp.java:153) finished in 4.985 s 18/04/17 16:51:05 INFO scheduler.DAGScheduler: Job 475 finished: foreachPartition at PredictorEngineApp.java:153, took 5.004017 s 18/04/17 16:51:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x291efa1f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x291efa1f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39451, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d00, negotiated timeout = 60000 18/04/17 16:51:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d00 18/04/17 16:51:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d00 closed 18/04/17 16:51:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.18 from job set of time 1523973060000 ms 18/04/17 16:51:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 493.0 (TID 493) in 5388 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:51:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 493.0, whose tasks have all completed, from pool 18/04/17 16:51:05 INFO scheduler.DAGScheduler: ResultStage 493 (foreachPartition at PredictorEngineApp.java:153) finished in 5.389 s 18/04/17 16:51:05 INFO scheduler.DAGScheduler: Job 492 finished: foreachPartition at PredictorEngineApp.java:153, took 5.500377 s 18/04/17 16:51:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x19773331 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x197733310x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34860, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93f3, negotiated timeout = 60000 18/04/17 16:51:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93f3 18/04/17 16:51:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93f3 closed 18/04/17 16:51:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.19 from job set of time 1523973060000 ms 18/04/17 16:51:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 474.0 (TID 474) in 6769 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:51:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 474.0, whose tasks have all completed, from pool 18/04/17 16:51:06 INFO scheduler.DAGScheduler: ResultStage 474 (foreachPartition at PredictorEngineApp.java:153) finished in 6.769 s 18/04/17 16:51:06 INFO scheduler.DAGScheduler: Job 474 finished: foreachPartition at PredictorEngineApp.java:153, took 6.783662 s 18/04/17 16:51:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6c92374e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6c92374e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39459, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d01, negotiated timeout = 60000 18/04/17 16:51:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d01 18/04/17 16:51:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d01 closed 18/04/17 16:51:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.15 from job set of time 1523973060000 ms 18/04/17 16:51:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 499.0 (TID 499) in 7198 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:51:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 499.0, whose tasks have all completed, from pool 18/04/17 16:51:07 INFO scheduler.DAGScheduler: ResultStage 499 (foreachPartition at PredictorEngineApp.java:153) finished in 7.199 s 18/04/17 16:51:07 INFO scheduler.DAGScheduler: Job 499 finished: foreachPartition at PredictorEngineApp.java:153, took 7.326744 s 18/04/17 16:51:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xf1fa9f9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xf1fa9f90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34868, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93f4, negotiated timeout = 60000 18/04/17 16:51:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93f4 18/04/17 16:51:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93f4 closed 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.28 from job set of time 1523973060000 ms 18/04/17 16:51:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 480.0 (TID 480) in 7604 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:51:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 480.0, whose tasks have all completed, from pool 18/04/17 16:51:07 INFO scheduler.DAGScheduler: ResultStage 480 (foreachPartition at PredictorEngineApp.java:153) finished in 7.605 s 18/04/17 16:51:07 INFO scheduler.DAGScheduler: Job 480 finished: foreachPartition at PredictorEngineApp.java:153, took 7.648414 s 18/04/17 16:51:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x65ae8364 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x65ae83640x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34871, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93f6, negotiated timeout = 60000 18/04/17 16:51:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93f6 18/04/17 16:51:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93f6 closed 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.9 from job set of time 1523973060000 ms 18/04/17 16:51:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 494.0 (TID 494) in 7777 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:51:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 494.0, whose tasks have all completed, from pool 18/04/17 16:51:07 INFO scheduler.DAGScheduler: ResultStage 494 (foreachPartition at PredictorEngineApp.java:153) finished in 7.778 s 18/04/17 16:51:07 INFO scheduler.DAGScheduler: Job 494 finished: foreachPartition at PredictorEngineApp.java:153, took 7.893595 s 18/04/17 16:51:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2cdc55fa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2cdc55fa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34875, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93f7, negotiated timeout = 60000 18/04/17 16:51:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93f7 18/04/17 16:51:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93f7 closed 18/04/17 16:51:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.31 from job set of time 1523973060000 ms 18/04/17 16:51:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 498.0 (TID 498) in 9239 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:51:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 498.0, whose tasks have all completed, from pool 18/04/17 16:51:09 INFO scheduler.DAGScheduler: ResultStage 498 (foreachPartition at PredictorEngineApp.java:153) finished in 9.240 s 18/04/17 16:51:09 INFO scheduler.DAGScheduler: Job 498 finished: foreachPartition at PredictorEngineApp.java:153, took 9.365001 s 18/04/17 16:51:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x204a6285 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x204a62850x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34881, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93f9, negotiated timeout = 60000 18/04/17 16:51:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93f9 18/04/17 16:51:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93f9 closed 18/04/17 16:51:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.23 from job set of time 1523973060000 ms 18/04/17 16:51:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 495.0 (TID 495) in 9329 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:51:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 495.0, whose tasks have all completed, from pool 18/04/17 16:51:09 INFO scheduler.DAGScheduler: ResultStage 495 (foreachPartition at PredictorEngineApp.java:153) finished in 9.330 s 18/04/17 16:51:09 INFO scheduler.DAGScheduler: Job 495 finished: foreachPartition at PredictorEngineApp.java:153, took 9.449987 s 18/04/17 16:51:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7e241436 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7e2414360x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
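Every one of these HBase client connections negotiates the same parameters: a three-node ZooKeeper ensemble on port 2181, sessionTimeout=60000 and baseZNode=/hbase. Those values map onto the standard HBase client settings sketched below; the quorum hostnames are placeholders, since the real ones are masked throughout this log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    final class HBaseClientConfigSketch {
      // Hedged sketch of the client settings implied by the ZooKeeper log lines above;
      // the quorum hostnames are placeholders because the log masks the real ones.
      static Configuration clientConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3");        // three-host ensemble, as in the log
        conf.set("hbase.zookeeper.property.clientPort", "2181");  // port seen on every connection
        conf.set("zookeeper.session.timeout", "60000");           // sessionTimeout=60000 in the log
        conf.set("zookeeper.znode.parent", "/hbase");             // baseZNode=/hbase in the log
        return conf;
      }
    }

In a deployment like this the same values would normally come from an hbase-site.xml on the application classpath, which HBaseConfiguration.create() picks up automatically; the explicit set calls above only tie the configuration keys to the values visible in the log.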
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34884, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93fa, negotiated timeout = 60000 18/04/17 16:51:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93fa 18/04/17 16:51:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93fa closed 18/04/17 16:51:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.6 from job set of time 1523973060000 ms 18/04/17 16:51:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 497.0 (TID 497) in 10901 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:51:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 497.0, whose tasks have all completed, from pool 18/04/17 16:51:11 INFO scheduler.DAGScheduler: ResultStage 497 (foreachPartition at PredictorEngineApp.java:153) finished in 10.903 s 18/04/17 16:51:11 INFO scheduler.DAGScheduler: Job 496 finished: foreachPartition at PredictorEngineApp.java:153, took 11.030437 s 18/04/17 16:51:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x34fbd403 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x34fbd4030x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56745, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93cd, negotiated timeout = 60000 18/04/17 16:51:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93cd 18/04/17 16:51:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93cd closed 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.33 from job set of time 1523973060000 ms 18/04/17 16:51:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 479.0 (TID 479) in 11539 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:51:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 479.0, whose tasks have all completed, from pool 18/04/17 16:51:11 INFO scheduler.DAGScheduler: ResultStage 479 (foreachPartition at PredictorEngineApp.java:153) finished in 11.539 s 18/04/17 16:51:11 INFO scheduler.DAGScheduler: Job 478 finished: foreachPartition at PredictorEngineApp.java:153, took 11.575876 s 18/04/17 16:51:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x685ff32a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x685ff32a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34898, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93fc, negotiated timeout = 60000 18/04/17 16:51:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93fc 18/04/17 16:51:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93fc closed 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.1 from job set of time 1523973060000 ms 18/04/17 16:51:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 488.0 (TID 488) in 11532 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:51:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 488.0, whose tasks have all completed, from pool 18/04/17 16:51:11 INFO scheduler.DAGScheduler: ResultStage 488 (foreachPartition at PredictorEngineApp.java:153) finished in 11.533 s 18/04/17 16:51:11 INFO scheduler.DAGScheduler: Job 490 finished: foreachPartition at PredictorEngineApp.java:153, took 11.611955 s 18/04/17 16:51:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7f3537eb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7f3537eb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39496, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d03, negotiated timeout = 60000 18/04/17 16:51:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d03 18/04/17 16:51:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d03 closed 18/04/17 16:51:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.22 from job set of time 1523973060000 ms 18/04/17 16:51:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 483.0 (TID 483) in 12217 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:51:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 483.0, whose tasks have all completed, from pool 18/04/17 16:51:12 INFO scheduler.DAGScheduler: ResultStage 483 (foreachPartition at PredictorEngineApp.java:153) finished in 12.219 s 18/04/17 16:51:12 INFO scheduler.DAGScheduler: Job 483 finished: foreachPartition at PredictorEngineApp.java:153, took 12.277430 s 18/04/17 16:51:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x37c597ca connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x37c597ca0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34905, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93fd, negotiated timeout = 60000 18/04/17 16:51:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93fd 18/04/17 16:51:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93fd closed 18/04/17 16:51:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.24 from job set of time 1523973060000 ms 18/04/17 16:51:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 482.0 (TID 482) in 13303 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:51:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 482.0, whose tasks have all completed, from pool 18/04/17 16:51:13 INFO scheduler.DAGScheduler: ResultStage 482 (foreachPartition at PredictorEngineApp.java:153) finished in 13.303 s 18/04/17 16:51:13 INFO scheduler.DAGScheduler: Job 482 finished: foreachPartition at PredictorEngineApp.java:153, took 13.358641 s 18/04/17 16:51:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2bf6bd09 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2bf6bd090x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34909, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c93ff, negotiated timeout = 60000 18/04/17 16:51:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c93ff 18/04/17 16:51:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c93ff closed 18/04/17 16:51:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.26 from job set of time 1523973060000 ms 18/04/17 16:51:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 496.0 (TID 496) in 13688 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:51:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 496.0, whose tasks have all completed, from pool 18/04/17 16:51:13 INFO scheduler.DAGScheduler: ResultStage 496 (foreachPartition at PredictorEngineApp.java:153) finished in 13.689 s 18/04/17 16:51:13 INFO scheduler.DAGScheduler: Job 497 finished: foreachPartition at PredictorEngineApp.java:153, took 13.814333 s 18/04/17 16:51:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66b60f05 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x66b60f050x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39508, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d05, negotiated timeout = 60000 18/04/17 16:51:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d05 18/04/17 16:51:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d05 closed 18/04/17 16:51:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.29 from job set of time 1523973060000 ms 18/04/17 16:51:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 487.0 (TID 487) in 13839 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:51:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 487.0, whose tasks have all completed, from pool 18/04/17 16:51:13 INFO scheduler.DAGScheduler: ResultStage 487 (foreachPartition at PredictorEngineApp.java:153) finished in 13.840 s 18/04/17 16:51:13 INFO scheduler.DAGScheduler: Job 487 finished: foreachPartition at PredictorEngineApp.java:153, took 13.915820 s 18/04/17 16:51:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xf619a4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xf619a40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56767, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93cf, negotiated timeout = 60000 18/04/17 16:51:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93cf 18/04/17 16:51:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93cf closed 18/04/17 16:51:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:14 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.2 from job set of time 1523973060000 ms 18/04/17 16:51:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 489.0 (TID 489) in 18482 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:51:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 489.0, whose tasks have all completed, from pool 18/04/17 16:51:18 INFO scheduler.DAGScheduler: ResultStage 489 (foreachPartition at PredictorEngineApp.java:153) finished in 18.483 s 18/04/17 16:51:18 INFO scheduler.DAGScheduler: Job 488 finished: foreachPartition at PredictorEngineApp.java:153, took 18.565863 s 18/04/17 16:51:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4a84af51 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4a84af510x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
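By this point the driver has opened and immediately closed a fresh ZooKeeper session after every completed job (sessionids 0x2626be142b28cfa, 0x1626be1444c93ec, 0x3626be1439a93c5 and so on), always against the same ensemble and always within a second or two. The log never shows what those short-lived connections do. One common reason for a quick driver-side HBase touch per job when reading Kafka with createDirectStream is persisting the processed offset ranges; the sketch below illustrates only that general pattern — it is not taken from PredictorEngineApp, and the table and column names are invented.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.streaming.kafka.HasOffsetRanges;
    import org.apache.spark.streaming.kafka.OffsetRange;

    final class OffsetStoreSketch {
      // Illustration of a per-job, driver-side offset commit to HBase — a common pattern with
      // createDirectStream, shown only as one possible explanation for the short hconnection
      // open/close pairs in this log; nothing here confirms the application actually does this.
      static void saveOffsets(JavaPairRDD<String, String> rdd, Configuration hbaseConf) throws java.io.IOException {
        OffsetRange[] ranges = ((HasOffsetRanges) rdd.rdd()).offsetRanges();  // offsets carried by the KafkaRDD
        try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
             Table table = connection.getTable(TableName.valueOf("stream_offsets"))) {  // invented table name
          for (OffsetRange r : ranges) {
            Put put = new Put(Bytes.toBytes(r.topic() + ":" + r.partition()));
            put.addColumn(Bytes.toBytes("o"), Bytes.toBytes("until"), Bytes.toBytes(r.untilOffset()));
            table.put(put);
          }
        }
      }
    }

Whatever the purpose, reusing a single long-lived Connection on the driver instead of creating one per job would avoid the once-per-minute burst of ZooKeeper sessions seen in this log.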
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39530, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d07, negotiated timeout = 60000 18/04/17 16:51:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d07 18/04/17 16:51:18 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d07 closed 18/04/17 16:51:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:18 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.5 from job set of time 1523973060000 ms 18/04/17 16:51:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 492.0 (TID 492) in 21332 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:51:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 492.0, whose tasks have all completed, from pool 18/04/17 16:51:21 INFO scheduler.DAGScheduler: ResultStage 492 (foreachPartition at PredictorEngineApp.java:153) finished in 21.333 s 18/04/17 16:51:21 INFO scheduler.DAGScheduler: Job 493 finished: foreachPartition at PredictorEngineApp.java:153, took 21.441318 s 18/04/17 16:51:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3efed052 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3efed0520x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:56802, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93d2, negotiated timeout = 60000 18/04/17 16:51:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93d2 18/04/17 16:51:21 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93d2 closed 18/04/17 16:51:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:21 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.11 from job set of time 1523973060000 ms 18/04/17 16:51:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 486.0 (TID 486) in 22366 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:51:22 INFO scheduler.DAGScheduler: ResultStage 486 (foreachPartition at PredictorEngineApp.java:153) finished in 22.367 s 18/04/17 16:51:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 486.0, whose tasks have all completed, from pool 18/04/17 16:51:22 INFO scheduler.DAGScheduler: Job 486 finished: foreachPartition at PredictorEngineApp.java:153, took 22.437531 s 18/04/17 16:51:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d64cb25 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:51:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7d64cb250x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:51:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:51:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39550, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:51:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d09, negotiated timeout = 60000 18/04/17 16:51:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d09 18/04/17 16:51:22 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d09 closed 18/04/17 16:51:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:51:22 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.10 from job set of time 1523973060000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Added jobs for time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.0 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.1 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.3 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.3 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.2 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.5 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.4 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.7 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.0 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.6 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.8 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.4 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.9 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.10 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.11 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.12 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.13 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.14 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.13 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.14 from job set of time 1523973120000 ms 18/04/17 16:52:00 
INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.16 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.15 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.17 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.16 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.18 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.17 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.20 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.19 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.21 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.21 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.22 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.23 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.24 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.26 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.25 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.27 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.28 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.29 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.31 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.32 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.30 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.33 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.30 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.35 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973120000 ms.34 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 500 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 500 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 500 (KafkaRDD[693] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_500 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_500_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_500_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 
491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 500 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 500 (KafkaRDD[693] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 500.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 501 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 501 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 501 (KafkaRDD[718] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 500.0 (TID 500, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_501 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_501_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_501_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 501 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 501 (KafkaRDD[718] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 501.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 502 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 502 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 502 (KafkaRDD[690] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 501.0 (TID 501, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_502 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_502_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_502_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 502 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 502 (KafkaRDD[690] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 502.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 503 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 503 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 503 (KafkaRDD[704] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 502.0 (TID 502, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_503 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_501_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_503_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_503_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 503 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 503 (KafkaRDD[704] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_500_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 503.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 505 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 504 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 504 (KafkaRDD[696] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_504 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 503.0 (TID 503, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_504_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_504_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 504 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 504 (KafkaRDD[696] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 504.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 504 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 505 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing 
parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 505 (KafkaRDD[691] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_505 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 504.0 (TID 504, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_505_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_505_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 505 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 505 (KafkaRDD[691] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 505.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 506 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 506 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 506 (KafkaRDD[706] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_506 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 505.0 (TID 505, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_502_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_506_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_506_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 506 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 506 (KafkaRDD[706] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 506.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 507 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 507 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 507 (KafkaRDD[709] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_507 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 
0.0 in stage 506.0 (TID 506, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_503_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_507_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_507_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 507 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 507 (KafkaRDD[709] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 507.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 508 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 508 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 508 (KafkaRDD[715] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_508 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 507.0 (TID 507, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_504_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_508_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_508_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 508 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 508 (KafkaRDD[715] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 508.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 509 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 509 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 509 (KafkaRDD[689] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_509 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 508.0 (TID 508, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_505_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, 
free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_509_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_509_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_506_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 509 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 509 (KafkaRDD[689] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 509.0 with 1 tasks 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_495_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 510 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 510 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 510 (KafkaRDD[708] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_510 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 509.0 (TID 509, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_495_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_507_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_510_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_510_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 510 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 510 (KafkaRDD[708] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 510.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 511 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 511 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 511 (KafkaRDD[713] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 476 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 510.0 (TID 510, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 
2047 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_474_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_511 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_474_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_511_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_511_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 511 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 511 (KafkaRDD[713] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 511.0 with 1 tasks 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_510_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 512 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 512 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 475 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 478 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 512 (KafkaRDD[702] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 511.0 (TID 511, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_512 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_476_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_476_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_509_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 477 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_512_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_475_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_512_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 512 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 512 (KafkaRDD[702] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 512.0 with 1 tasks 18/04/17 16:52:00 INFO 
scheduler.DAGScheduler: Got job 514 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 513 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 513 (KafkaRDD[711] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_475_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_513 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 512.0 (TID 512, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 480 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_477_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_477_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_511_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_513_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_513_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 513 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 513 (KafkaRDD[711] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 513.0 with 1 tasks 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 482 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 513 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 514 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 514 (KafkaRDD[716] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_480_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_514 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 513.0 (TID 513, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_480_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 481 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_479_piece0 on ***IP masked***:45737 
in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_514_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_514_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 514 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 514 (KafkaRDD[716] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 514.0 with 1 tasks 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_479_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 515 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 515 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 515 (KafkaRDD[707] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_512_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 484 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_515 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 514.0 (TID 514, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_482_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_482_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 483 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_515_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_515_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_481_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 515 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 515 (KafkaRDD[707] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 515.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 516 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 516 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 516 
(KafkaRDD[717] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_516 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 515.0 (TID 515, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_481_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_516_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_516_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_514_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 516 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 516 (KafkaRDD[717] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 516.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 517 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 517 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 517 (KafkaRDD[692] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_517 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 516.0 (TID 516, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_517_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_517_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 517 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 517 (KafkaRDD[692] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 517.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 518 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 518 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 518 (KafkaRDD[703] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_518 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 
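The 16:52:00 burst above is one micro-batch (the 1523973120000 ms job set) fanning out into several dozen independent output jobs: each streaming job in the set ends in the foreachPartition action at PredictorEngineApp.java:153, each ResultStage wraps a different single-partition KafkaRDD produced by createDirectStream at PredictorEngineApp.java:125 (which suggests the application registers many separate direct streams, each with its own output operation), and every job ships only a small (~5.7 KB) stage broadcast while ContextCleaner retires the previous batch's broadcasts. The interleaved Starting/Finished entries also suggest spark.streaming.concurrentJobs has been raised above its default of 1. A minimal, hypothetical sketch of driver code with this shape follows, assuming the Spark 1.6 streaming API and the Kafka 0.8 direct connector (spark-streaming-kafka); the class name, broker list, topic and batch interval are illustrative placeholders, not taken from the actual PredictorEngineApp:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;

    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.function.VoidFunction;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    public final class DirectStreamSketch {

        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
            // Batch interval is illustrative; the excerpt only shows batch time 1523973120000 ms.
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

            Map<String, String> kafkaParams = new HashMap<String, String>();
            kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers
            Set<String> topics = new HashSet<String>(Collections.singleton("events")); // placeholder topic

            // One direct stream yields one KafkaRDD per batch -- the KafkaRDD[nnn] named in the
            // "Submitting ResultStage ... createDirectStream" entries above.
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, topics);

            // Each foreachRDD registers one output operation; every batch then runs it as its own
            // job whose single stage is a ResultStage over the KafkaRDD, matching the
            // "Got job N (foreachPartition ...)" / "Submitting ResultStage ..." pairs in the log.
            stream.foreachRDD(new VoidFunction<JavaPairRDD<String, String>>() {
                @Override
                public void call(JavaPairRDD<String, String> rdd) {
                    rdd.foreachPartition(new VoidFunction<Iterator<Tuple2<String, String>>>() {
                        @Override
                        public void call(Iterator<Tuple2<String, String>> partition) {
                            while (partition.hasNext()) {
                                Tuple2<String, String> record = partition.next();
                                // executor-side work: score the record and emit the prediction
                            }
                        }
                    });
                }
            });

            jssc.start();
            jssc.awaitTermination();
        }
    }

If several topics are consumed, the same foreachRDD block would simply be registered once per direct stream, which is what multiplies the per-batch job count seen in this log.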
18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 517.0 (TID 517, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_515_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_518_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_518_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 518 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 518 (KafkaRDD[703] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 518.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 519 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 519 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_513_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 486 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 519 (KafkaRDD[710] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_519 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 518.0 (TID 518, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_484_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_484_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 485 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_519_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_519_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_483_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 519 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 519 (KafkaRDD[710] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 519.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 520 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 520 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO 
scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 520 (KafkaRDD[699] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_508_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_520 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_483_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 519.0 (TID 519, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 488 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_486_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_486_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_520_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_520_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 520 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 520 (KafkaRDD[699] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 520.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 521 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 521 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 521 (KafkaRDD[695] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 487 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_521 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 520.0 (TID 520, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_518_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_485_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_485_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 490 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_521_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_521_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 
INFO storage.BlockManagerInfo: Removed broadcast_488_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 521 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 521 (KafkaRDD[695] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 521.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 522 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 522 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 522 (KafkaRDD[694] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_488_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_522 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_517_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 489 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 521.0 (TID 521, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_519_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_487_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_522_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_522_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 522 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 522 (KafkaRDD[694] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_487_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 522.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 523 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 523 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 523 (KafkaRDD[686] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_523 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO 
spark.ContextCleaner: Cleaned accumulator 492 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 522.0 (TID 522, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_490_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_523_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_523_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 523 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 523 (KafkaRDD[686] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 523.0 with 1 tasks 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_520_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_490_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 524 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 524 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 524 (KafkaRDD[719] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_524 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 491 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 523.0 (TID 523, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_489_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_489_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_524_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_524_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 524 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 524 (KafkaRDD[719] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 524.0 with 1 tasks 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_521_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 525 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 525 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 525 (KafkaRDD[712] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 494 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_516_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_525 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 524.0 (TID 524, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_492_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_492_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_525_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_525_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 525 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 525 (KafkaRDD[712] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 525.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Got job 526 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 526 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting ResultStage 526 (KafkaRDD[685] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_526 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 525.0 (TID 525, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_523_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.MemoryStore: Block broadcast_526_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_526_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO spark.SparkContext: Created broadcast 526 from broadcast at DAGScheduler.scala:1006 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 526 (KafkaRDD[685] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Adding task set 526.0 with 1 tasks 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 526.0 (TID 526, ***hostname 
masked***, executor 12, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_524_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 493 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_522_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_491_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_491_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 496 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_526_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_494_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_494_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 495 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_493_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_493_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Added broadcast_525_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 498 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_496_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_496_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 497 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 500 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_498_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_498_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO spark.ContextCleaner: Cleaned accumulator 499 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_497_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_497_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_499_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:52:00 INFO storage.BlockManagerInfo: Removed broadcast_499_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 508.0 (TID 508) in 117 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: ResultStage 508 (foreachPartition at PredictorEngineApp.java:153) finished in 0.117 s 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 508.0, whose tasks have all 
completed, from pool 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Job 508 finished: foreachPartition at PredictorEngineApp.java:153, took 0.166066 s 18/04/17 16:52:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d6b229c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d6b229c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35180, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 522.0 (TID 522) in 52 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: ResultStage 522 (foreachPartition at PredictorEngineApp.java:153) finished in 0.053 s 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 522.0, whose tasks have all completed, from pool 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Job 522 finished: foreachPartition at PredictorEngineApp.java:153, took 0.164961 s 18/04/17 16:52:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72f70127 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x72f701270x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35181, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c940f, negotiated timeout = 60000 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9410, negotiated timeout = 60000 18/04/17 16:52:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9410 18/04/17 16:52:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c940f 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9410 closed 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c940f closed 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.10 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.31 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 510.0 (TID 510) in 140 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 510.0, whose tasks have all completed, from pool 18/04/17 16:52:00 INFO scheduler.DAGScheduler: ResultStage 510 (foreachPartition at PredictorEngineApp.java:153) finished in 0.141 s 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Job 510 finished: foreachPartition at PredictorEngineApp.java:153, took 0.211645 s 18/04/17 16:52:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x567e371a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x567e371a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39782, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d18, negotiated timeout = 60000 18/04/17 16:52:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d18 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d18 closed 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.24 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 511.0 (TID 511) in 167 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:52:00 INFO scheduler.DAGScheduler: ResultStage 511 (foreachPartition at PredictorEngineApp.java:153) finished in 0.168 s 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 511.0, whose tasks have all completed, from pool 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Job 511 finished: foreachPartition at PredictorEngineApp.java:153, took 0.243657 s 18/04/17 16:52:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xb1a8ab4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xb1a8ab40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35190, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9412, negotiated timeout = 60000 18/04/17 16:52:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9412 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9412 closed 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.29 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 524.0 (TID 524) in 495 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 524.0, whose tasks have all completed, from pool 18/04/17 16:52:00 INFO scheduler.DAGScheduler: ResultStage 524 (foreachPartition at PredictorEngineApp.java:153) finished in 0.495 s 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Job 524 finished: foreachPartition at PredictorEngineApp.java:153, took 0.612191 s 18/04/17 16:52:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x52d238b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x52d238b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57044, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93e2, negotiated timeout = 60000 18/04/17 16:52:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93e2 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93e2 closed 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.35 from job set of time 1523973120000 ms 18/04/17 16:52:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 507.0 (TID 507) in 848 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:52:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 507.0, whose tasks have all completed, from pool 18/04/17 16:52:00 INFO scheduler.DAGScheduler: ResultStage 507 (foreachPartition at PredictorEngineApp.java:153) finished in 0.849 s 18/04/17 16:52:00 INFO scheduler.DAGScheduler: Job 507 finished: foreachPartition at PredictorEngineApp.java:153, took 0.894116 s 18/04/17 16:52:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x8c0f684 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x8c0f6840x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39791, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d21, negotiated timeout = 60000 18/04/17 16:52:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d21 18/04/17 16:52:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d21 closed 18/04/17 16:52:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.25 from job set of time 1523973120000 ms 18/04/17 16:52:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 505.0 (TID 505) in 2889 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:52:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 505.0, whose tasks have all completed, from pool 18/04/17 16:52:02 INFO scheduler.DAGScheduler: ResultStage 505 (foreachPartition at PredictorEngineApp.java:153) finished in 2.890 s 18/04/17 16:52:02 INFO scheduler.DAGScheduler: Job 504 finished: foreachPartition at PredictorEngineApp.java:153, took 2.927708 s 18/04/17 16:52:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3081ff37 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3081ff370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35203, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9418, negotiated timeout = 60000 18/04/17 16:52:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9418 18/04/17 16:52:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9418 closed 18/04/17 16:52:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.7 from job set of time 1523973120000 ms 18/04/17 16:52:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 517.0 (TID 517) in 3015 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:52:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 517.0, whose tasks have all completed, from pool 18/04/17 16:52:03 INFO scheduler.DAGScheduler: ResultStage 517 (foreachPartition at PredictorEngineApp.java:153) finished in 3.016 s 18/04/17 16:52:03 INFO scheduler.DAGScheduler: Job 517 finished: foreachPartition at PredictorEngineApp.java:153, took 3.111394 s 18/04/17 16:52:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x49fe643f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x49fe643f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39802, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d24, negotiated timeout = 60000 18/04/17 16:52:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d24 18/04/17 16:52:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d24 closed 18/04/17 16:52:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.8 from job set of time 1523973120000 ms 18/04/17 16:52:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 503.0 (TID 503) in 4435 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:52:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 503.0, whose tasks have all completed, from pool 18/04/17 16:52:04 INFO scheduler.DAGScheduler: ResultStage 503 (foreachPartition at PredictorEngineApp.java:153) finished in 4.435 s 18/04/17 16:52:04 INFO scheduler.DAGScheduler: Job 503 finished: foreachPartition at PredictorEngineApp.java:153, took 4.465750 s 18/04/17 16:52:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x151ffc9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x151ffc90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39807, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d25, negotiated timeout = 60000 18/04/17 16:52:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d25 18/04/17 16:52:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d25 closed 18/04/17 16:52:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.20 from job set of time 1523973120000 ms 18/04/17 16:52:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 513.0 (TID 513) in 4559 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:52:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 513.0, whose tasks have all completed, from pool 18/04/17 16:52:04 INFO scheduler.DAGScheduler: ResultStage 513 (foreachPartition at PredictorEngineApp.java:153) finished in 4.560 s 18/04/17 16:52:04 INFO scheduler.DAGScheduler: Job 514 finished: foreachPartition at PredictorEngineApp.java:153, took 4.645695 s 18/04/17 16:52:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63f923b9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x63f923b90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39810, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d26, negotiated timeout = 60000 18/04/17 16:52:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d26 18/04/17 16:52:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d26 closed 18/04/17 16:52:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.27 from job set of time 1523973120000 ms 18/04/17 16:52:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 525.0 (TID 525) in 5592 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:52:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 525.0, whose tasks have all completed, from pool 18/04/17 16:52:05 INFO scheduler.DAGScheduler: ResultStage 525 (foreachPartition at PredictorEngineApp.java:153) finished in 5.593 s 18/04/17 16:52:05 INFO scheduler.DAGScheduler: Job 525 finished: foreachPartition at PredictorEngineApp.java:153, took 5.711703 s 18/04/17 16:52:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4decefbd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4decefbd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57072, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93e4, negotiated timeout = 60000 18/04/17 16:52:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93e4 18/04/17 16:52:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93e4 closed 18/04/17 16:52:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.28 from job set of time 1523973120000 ms 18/04/17 16:52:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 523.0 (TID 523) in 6308 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:52:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 523.0, whose tasks have all completed, from pool 18/04/17 16:52:06 INFO scheduler.DAGScheduler: ResultStage 523 (foreachPartition at PredictorEngineApp.java:153) finished in 6.308 s 18/04/17 16:52:06 INFO scheduler.DAGScheduler: Job 523 finished: foreachPartition at PredictorEngineApp.java:153, took 6.423502 s 18/04/17 16:52:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6b3b77bf connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6b3b77bf0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57076, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93e5, negotiated timeout = 60000 18/04/17 16:52:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93e5 18/04/17 16:52:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93e5 closed 18/04/17 16:52:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.2 from job set of time 1523973120000 ms 18/04/17 16:52:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 504.0 (TID 504) in 7071 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:52:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 504.0, whose tasks have all completed, from pool 18/04/17 16:52:07 INFO scheduler.DAGScheduler: ResultStage 504 (foreachPartition at PredictorEngineApp.java:153) finished in 7.072 s 18/04/17 16:52:07 INFO scheduler.DAGScheduler: Job 505 finished: foreachPartition at PredictorEngineApp.java:153, took 7.107027 s 18/04/17 16:52:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2cf8dd1d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2cf8dd1d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39824, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d29, negotiated timeout = 60000 18/04/17 16:52:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d29 18/04/17 16:52:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d29 closed 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.12 from job set of time 1523973120000 ms 18/04/17 16:52:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 514.0 (TID 514) in 7088 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:52:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 514.0, whose tasks have all completed, from pool 18/04/17 16:52:07 INFO scheduler.DAGScheduler: ResultStage 514 (foreachPartition at PredictorEngineApp.java:153) finished in 7.089 s 18/04/17 16:52:07 INFO scheduler.DAGScheduler: Job 513 finished: foreachPartition at PredictorEngineApp.java:153, took 7.179178 s 18/04/17 16:52:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x257c4b85 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x257c4b850x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39827, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d2a, negotiated timeout = 60000 18/04/17 16:52:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d2a 18/04/17 16:52:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d2a closed 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.32 from job set of time 1523973120000 ms 18/04/17 16:52:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 502.0 (TID 502) in 7207 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:52:07 INFO scheduler.DAGScheduler: ResultStage 502 (foreachPartition at PredictorEngineApp.java:153) finished in 7.207 s 18/04/17 16:52:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 502.0, whose tasks have all completed, from pool 18/04/17 16:52:07 INFO scheduler.DAGScheduler: Job 502 finished: foreachPartition at PredictorEngineApp.java:153, took 7.230508 s 18/04/17 16:52:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x25b346b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x25b346b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39830, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d2c, negotiated timeout = 60000 18/04/17 16:52:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d2c 18/04/17 16:52:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d2c closed 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.6 from job set of time 1523973120000 ms 18/04/17 16:52:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 515.0 (TID 515) in 7563 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:52:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 515.0, whose tasks have all completed, from pool 18/04/17 16:52:07 INFO scheduler.DAGScheduler: ResultStage 515 (foreachPartition at PredictorEngineApp.java:153) finished in 7.564 s 18/04/17 16:52:07 INFO scheduler.DAGScheduler: Job 515 finished: foreachPartition at PredictorEngineApp.java:153, took 7.657527 s 18/04/17 16:52:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x26e891 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x26e8910x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35238, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c941b, negotiated timeout = 60000 18/04/17 16:52:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c941b 18/04/17 16:52:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c941b closed 18/04/17 16:52:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.23 from job set of time 1523973120000 ms 18/04/17 16:52:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 501.0 (TID 501) in 8395 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:52:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 501.0, whose tasks have all completed, from pool 18/04/17 16:52:08 INFO scheduler.DAGScheduler: ResultStage 501 (foreachPartition at PredictorEngineApp.java:153) finished in 8.396 s 18/04/17 16:52:08 INFO scheduler.DAGScheduler: Job 501 finished: foreachPartition at PredictorEngineApp.java:153, took 8.414468 s 18/04/17 16:52:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1aeb4714 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1aeb47140x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35242, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c941c, negotiated timeout = 60000 18/04/17 16:52:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c941c 18/04/17 16:52:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c941c closed 18/04/17 16:52:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.34 from job set of time 1523973120000 ms 18/04/17 16:52:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 516.0 (TID 516) in 8536 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:52:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 516.0, whose tasks have all completed, from pool 18/04/17 16:52:08 INFO scheduler.DAGScheduler: ResultStage 516 (foreachPartition at PredictorEngineApp.java:153) finished in 8.536 s 18/04/17 16:52:08 INFO scheduler.DAGScheduler: Job 516 finished: foreachPartition at PredictorEngineApp.java:153, took 8.628284 s 18/04/17 16:52:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x45453ba6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x45453ba60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57096, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93e8, negotiated timeout = 60000 18/04/17 16:52:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93e8 18/04/17 16:52:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93e8 closed 18/04/17 16:52:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.33 from job set of time 1523973120000 ms 18/04/17 16:52:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 506.0 (TID 506) in 8934 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:52:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 506.0, whose tasks have all completed, from pool 18/04/17 16:52:09 INFO scheduler.DAGScheduler: ResultStage 506 (foreachPartition at PredictorEngineApp.java:153) finished in 8.934 s 18/04/17 16:52:09 INFO scheduler.DAGScheduler: Job 506 finished: foreachPartition at PredictorEngineApp.java:153, took 8.976088 s 18/04/17 16:52:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x312a1771 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x312a17710x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39843, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d2d, negotiated timeout = 60000 18/04/17 16:52:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d2d 18/04/17 16:52:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d2d closed 18/04/17 16:52:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.22 from job set of time 1523973120000 ms 18/04/17 16:52:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 519.0 (TID 519) in 9914 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:52:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 519.0, whose tasks have all completed, from pool 18/04/17 16:52:10 INFO scheduler.DAGScheduler: ResultStage 519 (foreachPartition at PredictorEngineApp.java:153) finished in 9.915 s 18/04/17 16:52:10 INFO scheduler.DAGScheduler: Job 519 finished: foreachPartition at PredictorEngineApp.java:153, took 10.017668 s 18/04/17 16:52:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x52b0c5ef connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x52b0c5ef0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39848, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d2e, negotiated timeout = 60000 18/04/17 16:52:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d2e 18/04/17 16:52:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d2e closed 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.26 from job set of time 1523973120000 ms 18/04/17 16:52:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 518.0 (TID 518) in 10119 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:52:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 518.0, whose tasks have all completed, from pool 18/04/17 16:52:10 INFO scheduler.DAGScheduler: ResultStage 518 (foreachPartition at PredictorEngineApp.java:153) finished in 10.120 s 18/04/17 16:52:10 INFO scheduler.DAGScheduler: Job 518 finished: foreachPartition at PredictorEngineApp.java:153, took 10.218570 s 18/04/17 16:52:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x185b03f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x185b03f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39852, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d2f, negotiated timeout = 60000 18/04/17 16:52:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d2f 18/04/17 16:52:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d2f closed 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.19 from job set of time 1523973120000 ms 18/04/17 16:52:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 512.0 (TID 512) in 10260 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:52:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 512.0, whose tasks have all completed, from pool 18/04/17 16:52:10 INFO scheduler.DAGScheduler: ResultStage 512 (foreachPartition at PredictorEngineApp.java:153) finished in 10.260 s 18/04/17 16:52:10 INFO scheduler.DAGScheduler: Job 512 finished: foreachPartition at PredictorEngineApp.java:153, took 10.343460 s 18/04/17 16:52:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1e255b03 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1e255b030x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35260, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c941e, negotiated timeout = 60000 18/04/17 16:52:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c941e 18/04/17 16:52:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c941e closed 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.18 from job set of time 1523973120000 ms 18/04/17 16:52:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 500.0 (TID 500) in 10434 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:52:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 500.0, whose tasks have all completed, from pool 18/04/17 16:52:10 INFO scheduler.DAGScheduler: ResultStage 500 (foreachPartition at PredictorEngineApp.java:153) finished in 10.434 s 18/04/17 16:52:10 INFO scheduler.DAGScheduler: Job 500 finished: foreachPartition at PredictorEngineApp.java:153, took 10.449436 s 18/04/17 16:52:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x622585e8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x622585e80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35263, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c941f, negotiated timeout = 60000 18/04/17 16:52:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c941f 18/04/17 16:52:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c941f closed 18/04/17 16:52:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.9 from job set of time 1523973120000 ms 18/04/17 16:52:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 526.0 (TID 526) in 11789 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:52:11 INFO scheduler.DAGScheduler: ResultStage 526 (foreachPartition at PredictorEngineApp.java:153) finished in 11.791 s 18/04/17 16:52:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 526.0, whose tasks have all completed, from pool 18/04/17 16:52:11 INFO scheduler.DAGScheduler: Job 526 finished: foreachPartition at PredictorEngineApp.java:153, took 11.910803 s 18/04/17 16:52:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6bae9f54 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6bae9f540x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57119, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93ea, negotiated timeout = 60000 18/04/17 16:52:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93ea 18/04/17 16:52:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93ea closed 18/04/17 16:52:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.1 from job set of time 1523973120000 ms 18/04/17 16:52:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 509.0 (TID 509) in 15174 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:52:15 INFO scheduler.DAGScheduler: ResultStage 509 (foreachPartition at PredictorEngineApp.java:153) finished in 15.175 s 18/04/17 16:52:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 509.0, whose tasks have all completed, from pool 18/04/17 16:52:15 INFO scheduler.DAGScheduler: Job 509 finished: foreachPartition at PredictorEngineApp.java:153, took 15.241953 s 18/04/17 16:52:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3441445f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3441445f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57128, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93eb, negotiated timeout = 60000 18/04/17 16:52:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93eb 18/04/17 16:52:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93eb closed 18/04/17 16:52:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.5 from job set of time 1523973120000 ms 18/04/17 16:52:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 521.0 (TID 521) in 15482 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:52:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 521.0, whose tasks have all completed, from pool 18/04/17 16:52:15 INFO scheduler.DAGScheduler: ResultStage 521 (foreachPartition at PredictorEngineApp.java:153) finished in 15.483 s 18/04/17 16:52:15 INFO scheduler.DAGScheduler: Job 521 finished: foreachPartition at PredictorEngineApp.java:153, took 15.592887 s 18/04/17 16:52:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x54bf014f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:52:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x54bf014f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:52:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:52:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35280, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:52:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9421, negotiated timeout = 60000 18/04/17 16:52:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9421 18/04/17 16:52:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9421 closed 18/04/17 16:52:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:52:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.11 from job set of time 1523973120000 ms 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 526 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 502 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_500_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_500_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO scheduler.JobScheduler: Added jobs for time 1523973180000 ms 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 501 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.0 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.0 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.3 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.2 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.1 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.4 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.3 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.5 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.4 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.6 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.8 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.7 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.9 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.10 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.11 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.12 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: 
Starting job streaming job 1523973180000 ms.13 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.14 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.13 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.15 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.14 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.16 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.18 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.19 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.17 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.20 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.21 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.17 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.22 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.23 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.21 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.16 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.25 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.26 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.24 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.28 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.27 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.29 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.30 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.31 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.30 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.32 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.33 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.34 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_502_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973180000 ms.35 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_502_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 503 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_501_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_501_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 527 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 527 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 505 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 527 (KafkaRDD[743] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 
16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_503_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_503_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 504 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_527 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_505_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_505_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 506 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_504_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_504_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 508 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_506_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_527_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_527_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 527 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 527 (KafkaRDD[743] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 527.0 with 1 tasks 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_506_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 528 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 528 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 528 (KafkaRDD[753] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 
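The batch at 1523973180000 ms shows the overall shape of the application: the JobScheduler starts several dozen numbered output operations (".13" through ".35" are visible above), and each one becomes a SparkContext job whose single ResultStage is a one-partition KafkaRDD created by createDirectStream at PredictorEngineApp.java:125 and consumed by foreachPartition at PredictorEngineApp.java:153. The application source is not part of this log, so the following is only a minimal driver-side sketch of code that would produce this pattern; the class name, topic names, broker list, one-minute batch interval, and the mapping of the two commented lines onto the real source lines 125 and 153 are all assumptions.

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.function.VoidFunction;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    public final class DirectStreamSketch {

        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
            // One-minute batches would line up with the 1523973180000 ms batch timestamps;
            // the real interval is not recorded in this part of the log.
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

            Map<String, String> kafkaParams = new HashMap<String, String>();
            kafkaParams.put("metadata.broker.list", "broker-1:9092,broker-2:9092"); // hypothetical brokers

            // Hypothetical topic list: one direct stream per topic yields one KafkaRDD and one
            // foreachPartition job per topic in every batch, matching the numbered streaming jobs.
            List<String> topics = Arrays.asList("events-a", "events-b", "events-c");

            for (String topic : topics) {
                JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                        kafkaParams, Collections.singleton(topic));      // ~ PredictorEngineApp.java:125

                // Each foreachRDD call registers one output operation with the JobScheduler.
                stream.foreachRDD(new VoidFunction<JavaPairRDD<String, String>>() {
                    @Override
                    public void call(JavaPairRDD<String, String> rdd) {
                        // Runs on the driver once per batch. With single-partition topics, the
                        // partition body below becomes the one-task ResultStage the DAGScheduler logs.
                        rdd.foreachPartition(records -> {                // ~ PredictorEngineApp.java:153
                            while (records.hasNext()) {
                                Tuple2<String, String> record = records.next();
                                // score the record and write the prediction to the external store
                            }
                        });
                    }
                });
            }

            jssc.start();
            jssc.awaitTermination();
        }
    }

With this structure every topic contributes one independent, single-task job per batch, which is why the DAGScheduler entries repeat the same "Got job N ... with 1 output partitions ... Submitting ResultStage N (KafkaRDD[...] at createDirectStream at PredictorEngineApp.java:125)" sequence once per output operation.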
18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 527.0 (TID 527, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_528 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 507 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_508_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_508_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 509 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_528_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_528_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_507_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 528 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 528 (KafkaRDD[753] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 528.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 529 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 529 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 529 (KafkaRDD[726] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 528.0 (TID 528, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_529 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_507_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 511 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_509_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_529_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_529_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 529 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 529 (KafkaRDD[726] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 529.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 530 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 530 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 530 (KafkaRDD[727] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_509_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 529.0 (TID 529, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_530 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_527_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 510 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_511_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_530_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_530_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 530 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 530 (KafkaRDD[727] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 530.0 with 1 tasks 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_511_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 531 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 531 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 531 (KafkaRDD[744] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_531 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 530.0 (TID 530, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_528_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 512 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_510_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_510_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 514 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_531_piece0 stored as bytes in 
memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_531_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_512_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 531 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 531 (KafkaRDD[744] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 531.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 532 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 532 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 532 (KafkaRDD[728] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_512_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_532 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 531.0 (TID 531, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_532_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_532_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 532 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 532 (KafkaRDD[728] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 532.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 533 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 533 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 533 (KafkaRDD[740] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_533 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 532.0 (TID 532, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 513 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_531_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_514_piece0 on ***IP masked***:45737 in 
memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_514_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 515 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_533_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_533_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_513_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 533 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_513_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 533 (KafkaRDD[740] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 533.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 534 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 534 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 534 (KafkaRDD[725] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 533.0 (TID 533, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_534 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 517 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_529_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_515_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_515_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_532_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 516 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_517_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_517_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 518 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_534_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_534_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_516_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 
491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 534 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 534 (KafkaRDD[725] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 534.0 with 1 tasks 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_516_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 535 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 535 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 535 (KafkaRDD[722] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_535 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 534.0 (TID 534, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 520 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_533_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_518_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_518_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_530_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 519 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_519_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_535_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_535_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_519_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 535 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 535 (KafkaRDD[722] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 535.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 536 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 536 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 536 (KafkaRDD[742] at 
createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 523 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_536 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 535.0 (TID 535, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_521_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_534_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_521_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 522 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_523_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_523_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_536_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_536_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 536 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 536 (KafkaRDD[742] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 536.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 537 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 537 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 537 (KafkaRDD[732] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_537 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 536.0 (TID 536, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_535_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 524 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_522_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_522_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_537_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_537_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 
16:53:00 INFO spark.SparkContext: Created broadcast 537 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 537 (KafkaRDD[732] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 537.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 538 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 538 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 538 (KafkaRDD[738] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_538 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_524_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 537.0 (TID 537, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_524_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 525 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_536_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_526_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_538_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_538_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 538 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 538 (KafkaRDD[738] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 538.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 539 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 539 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 539 (KafkaRDD[735] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_526_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_539 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 538.0 (TID 538, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 
16:53:00 INFO spark.ContextCleaner: Cleaned accumulator 527 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_525_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Removed broadcast_525_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_539_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_539_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 539 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 539 (KafkaRDD[735] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 539.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 540 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 540 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 540 (KafkaRDD[747] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_540 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 539.0 (TID 539, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_537_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_540_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_540_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 540 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 540 (KafkaRDD[747] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 540.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 541 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 541 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 541 (KafkaRDD[754] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_538_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_541 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 540.0 (TID 540, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_539_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_541_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_541_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 541 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 541 (KafkaRDD[754] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 541.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 542 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 542 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 542 (KafkaRDD[748] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_542 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 541.0 (TID 541, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_542_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_542_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 542 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 542 (KafkaRDD[748] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 542.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 543 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 543 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 543 (KafkaRDD[746] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_543 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 542.0 (TID 542, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_543_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_543_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 543 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 543 (KafkaRDD[746] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 543.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 544 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 544 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 544 (KafkaRDD[731] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_544 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 543.0 (TID 543, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_540_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_541_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_544_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_544_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 544 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 544 (KafkaRDD[731] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 544.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 545 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 545 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 545 (KafkaRDD[752] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_545 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 544.0 (TID 544, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_543_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_545_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_545_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: 
Created broadcast 545 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 545 (KafkaRDD[752] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 545.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 546 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 546 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 546 (KafkaRDD[730] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_546 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 545.0 (TID 545, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_544_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_546_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_546_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 546 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 546 (KafkaRDD[730] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 546.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 547 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 547 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 547 (KafkaRDD[729] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_547 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 546.0 (TID 546, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_547_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_547_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 547 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 547 (KafkaRDD[729] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 547.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 548 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 548 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 548 (KafkaRDD[739] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_548 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_545_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 547.0 (TID 547, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_546_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_548_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_548_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 548 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 548 (KafkaRDD[739] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 548.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 549 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 549 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 549 (KafkaRDD[749] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_549 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 548.0 (TID 548, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_549_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_549_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 549 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 549 (KafkaRDD[749] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 549.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 551 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 550 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO 
scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 550 (KafkaRDD[721] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_550 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_547_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 549.0 (TID 549, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_550_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_550_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 550 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 550 (KafkaRDD[721] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 550.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 550 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 551 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 551 (KafkaRDD[751] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_548_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_551 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 550.0 (TID 550, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_551_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_551_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 551 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 551 (KafkaRDD[751] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 551.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 552 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 552 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 552 (KafkaRDD[755] at createDirectStream 
at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_552 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_542_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 551.0 (TID 551, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_552_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_552_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 552 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 552 (KafkaRDD[755] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 552.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Got job 553 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 553 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting ResultStage 553 (KafkaRDD[745] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_553 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 552.0 (TID 552, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 16:53:00 INFO storage.MemoryStore: Block broadcast_553_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_553_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:00 INFO spark.SparkContext: Created broadcast 553 from broadcast at DAGScheduler.scala:1006 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 553 (KafkaRDD[745] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Adding task set 553.0 with 1 tasks 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 553.0 (TID 553, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_551_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_550_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_553_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_552_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:00 INFO storage.BlockManagerInfo: Added broadcast_549_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 
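The task launches above are NODE_LOCAL or RACK_LOCAL because the direct Kafka stream advertises the partition leader's broker host as the preferred location of its single partition, so tasks are placed on or near the broker that owns the data. The entries that follow show what those tasks do after reading: a short-lived HBase client connection ("hconnection-0x...") is opened against the ZooKeeper ensemble and closed again within the same task. That open/close-per-partition churn is what you would expect from a foreachPartition body that builds its own connection, as in the hypothetical sketch below; the class name, the "predictions" table, the "p" column family, the key/value layout, and the assumption that the cluster exposes the HBase 1.x ConnectionFactory API are all invented for illustration.

    import java.util.Iterator;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import scala.Tuple2;

    public final class HBaseWriteSketch {

        private HBaseWriteSketch() { }

        // Hypothetical body of the foreachPartition call at PredictorEngineApp.java:153.
        public static void writePartition(Iterator<Tuple2<String, String>> records) throws Exception {
            Configuration hbaseConf = HBaseConfiguration.create(); // HBase client config from the classpath
            // createConnection() opens a ZooKeeper session against the /hbase ensemble, producing the
            // "hconnection-0x..." / "Session establishment complete" entries, one per task.
            try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
                 Table table = connection.getTable(TableName.valueOf("predictions"))) {
                while (records.hasNext()) {
                    Tuple2<String, String> record = records.next();
                    Put put = new Put(Bytes.toBytes(record._1()));
                    put.addColumn(Bytes.toBytes("p"), Bytes.toBytes("score"), Bytes.toBytes(record._2()));
                    table.put(put);
                }
            }
            // Leaving the try block closes the connection, which is the paired
            // "Closing zookeeper sessionid" / "Session: 0x... closed" / "EventThread shut down" sequence.
        }
    }

Invoked as rdd.foreachPartition(HBaseWriteSketch::writePartition), this gives straightforward per-batch writes but pays one ZooKeeper session setup and teardown per task per batch; a common refinement is a lazily created, per-executor connection that is reused across batches instead of being rebuilt inside every partition.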
18/04/17 16:53:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 536.0 (TID 536) in 176 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 536.0, whose tasks have all completed, from pool 18/04/17 16:53:00 INFO scheduler.DAGScheduler: ResultStage 536 (foreachPartition at PredictorEngineApp.java:153) finished in 0.177 s 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 540.0 (TID 540) in 160 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 540.0, whose tasks have all completed, from pool 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Job 536 finished: foreachPartition at PredictorEngineApp.java:153, took 0.241118 s 18/04/17 16:53:00 INFO scheduler.DAGScheduler: ResultStage 540 (foreachPartition at PredictorEngineApp.java:153) finished in 0.161 s 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Job 540 finished: foreachPartition at PredictorEngineApp.java:153, took 0.240645 s 18/04/17 16:53:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6d8a0cf4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x405b1fa4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6d8a0cf40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x405b1fa40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57313, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35462, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93fa, negotiated timeout = 60000 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9430, negotiated timeout = 60000 18/04/17 16:53:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9430 18/04/17 16:53:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93fa 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9430 closed 18/04/17 16:53:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93fa closed 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.22 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.27 from job set of time 1523973180000 ms 18/04/17 16:53:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 552.0 (TID 552) in 436 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:53:00 INFO scheduler.DAGScheduler: ResultStage 552 (foreachPartition at PredictorEngineApp.java:153) finished in 0.438 s 18/04/17 16:53:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 552.0, whose tasks have all completed, from pool 18/04/17 16:53:00 INFO scheduler.DAGScheduler: Job 552 finished: foreachPartition at PredictorEngineApp.java:153, took 0.564448 s 18/04/17 16:53:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x62456318 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x624563180x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40063, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d3e, negotiated timeout = 60000 18/04/17 16:53:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d3e 18/04/17 16:53:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d3e closed 18/04/17 16:53:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.35 from job set of time 1523973180000 ms 18/04/17 16:53:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 553.0 (TID 553) in 2147 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:53:02 INFO scheduler.DAGScheduler: ResultStage 553 (foreachPartition at PredictorEngineApp.java:153) finished in 2.148 s 18/04/17 16:53:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 553.0, whose tasks have all completed, from pool 18/04/17 16:53:02 INFO scheduler.DAGScheduler: Job 553 finished: foreachPartition at PredictorEngineApp.java:153, took 2.276943 s 18/04/17 16:53:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4048db64 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4048db640x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57324, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a93ff, negotiated timeout = 60000 18/04/17 16:53:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a93ff 18/04/17 16:53:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a93ff closed 18/04/17 16:53:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.25 from job set of time 1523973180000 ms 18/04/17 16:53:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 532.0 (TID 532) in 3057 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:53:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 532.0, whose tasks have all completed, from pool 18/04/17 16:53:03 INFO scheduler.DAGScheduler: ResultStage 532 (foreachPartition at PredictorEngineApp.java:153) finished in 3.057 s 18/04/17 16:53:03 INFO scheduler.DAGScheduler: Job 532 finished: foreachPartition at PredictorEngineApp.java:153, took 3.098339 s 18/04/17 16:53:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x78555027 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x785550270x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57330, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9401, negotiated timeout = 60000 18/04/17 16:53:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9401 18/04/17 16:53:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9401 closed 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.8 from job set of time 1523973180000 ms 18/04/17 16:53:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 530.0 (TID 530) in 3276 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:53:03 INFO scheduler.DAGScheduler: ResultStage 530 (foreachPartition at PredictorEngineApp.java:153) finished in 3.277 s 18/04/17 16:53:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 530.0, whose tasks have all completed, from pool 18/04/17 16:53:03 INFO scheduler.DAGScheduler: Job 530 finished: foreachPartition at PredictorEngineApp.java:153, took 3.306491 s 18/04/17 16:53:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4bc2e8b2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4bc2e8b20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57333, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9403, negotiated timeout = 60000 18/04/17 16:53:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9403 18/04/17 16:53:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9403 closed 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.7 from job set of time 1523973180000 ms 18/04/17 16:53:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 537.0 (TID 537) in 3498 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:53:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 537.0, whose tasks have all completed, from pool 18/04/17 16:53:03 INFO scheduler.DAGScheduler: ResultStage 537 (foreachPartition at PredictorEngineApp.java:153) finished in 3.499 s 18/04/17 16:53:03 INFO scheduler.DAGScheduler: Job 537 finished: foreachPartition at PredictorEngineApp.java:153, took 3.568163 s 18/04/17 16:53:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7ab46380 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7ab463800x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35485, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9439, negotiated timeout = 60000 18/04/17 16:53:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9439 18/04/17 16:53:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9439 closed 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.12 from job set of time 1523973180000 ms 18/04/17 16:53:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 535.0 (TID 535) in 3760 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:53:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 535.0, whose tasks have all completed, from pool 18/04/17 16:53:03 INFO scheduler.DAGScheduler: ResultStage 535 (foreachPartition at PredictorEngineApp.java:153) finished in 3.761 s 18/04/17 16:53:03 INFO scheduler.DAGScheduler: Job 535 finished: foreachPartition at PredictorEngineApp.java:153, took 3.822578 s 18/04/17 16:53:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x338cba75 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x338cba750x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40083, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d3f, negotiated timeout = 60000 18/04/17 16:53:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d3f 18/04/17 16:53:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d3f closed 18/04/17 16:53:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.2 from job set of time 1523973180000 ms 18/04/17 16:53:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 533.0 (TID 533) in 4486 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:53:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 533.0, whose tasks have all completed, from pool 18/04/17 16:53:04 INFO scheduler.DAGScheduler: ResultStage 533 (foreachPartition at PredictorEngineApp.java:153) finished in 4.487 s 18/04/17 16:53:04 INFO scheduler.DAGScheduler: Job 533 finished: foreachPartition at PredictorEngineApp.java:153, took 4.534364 s 18/04/17 16:53:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2bfc76de connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2bfc76de0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40089, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d40, negotiated timeout = 60000 18/04/17 16:53:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d40 18/04/17 16:53:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d40 closed 18/04/17 16:53:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.20 from job set of time 1523973180000 ms 18/04/17 16:53:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 551.0 (TID 551) in 5221 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:53:05 INFO scheduler.DAGScheduler: ResultStage 551 (foreachPartition at PredictorEngineApp.java:153) finished in 5.222 s 18/04/17 16:53:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 551.0, whose tasks have all completed, from pool 18/04/17 16:53:05 INFO scheduler.DAGScheduler: Job 550 finished: foreachPartition at PredictorEngineApp.java:153, took 5.368273 s 18/04/17 16:53:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x37582729 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x375827290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35498, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c943a, negotiated timeout = 60000 18/04/17 16:53:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c943a 18/04/17 16:53:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c943a closed 18/04/17 16:53:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.31 from job set of time 1523973180000 ms 18/04/17 16:53:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 539.0 (TID 539) in 6582 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:53:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 539.0, whose tasks have all completed, from pool 18/04/17 16:53:06 INFO scheduler.DAGScheduler: ResultStage 539 (foreachPartition at PredictorEngineApp.java:153) finished in 6.583 s 18/04/17 16:53:06 INFO scheduler.DAGScheduler: Job 539 finished: foreachPartition at PredictorEngineApp.java:153, took 6.659411 s 18/04/17 16:53:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b0c819d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7b0c819d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57353, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9405, negotiated timeout = 60000 18/04/17 16:53:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9405 18/04/17 16:53:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9405 closed 18/04/17 16:53:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.15 from job set of time 1523973180000 ms 18/04/17 16:53:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 548.0 (TID 548) in 7398 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:53:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 548.0, whose tasks have all completed, from pool 18/04/17 16:53:07 INFO scheduler.DAGScheduler: ResultStage 548 (foreachPartition at PredictorEngineApp.java:153) finished in 7.407 s 18/04/17 16:53:07 INFO scheduler.DAGScheduler: Job 548 finished: foreachPartition at PredictorEngineApp.java:153, took 7.517359 s 18/04/17 16:53:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5d923d3d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5d923d3d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40101, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 528.0 (TID 528) in 7506 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:53:07 INFO scheduler.DAGScheduler: ResultStage 528 (foreachPartition at PredictorEngineApp.java:153) finished in 7.506 s 18/04/17 16:53:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 528.0, whose tasks have all completed, from pool 18/04/17 16:53:07 INFO scheduler.DAGScheduler: Job 528 finished: foreachPartition at PredictorEngineApp.java:153, took 7.528087 s 18/04/17 16:53:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xaa14624 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xaa146240x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35507, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d44, negotiated timeout = 60000 18/04/17 16:53:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d44 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c943b, negotiated timeout = 60000 18/04/17 16:53:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c943b 18/04/17 16:53:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d44 closed 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c943b closed 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.19 from job set of time 1523973180000 ms 18/04/17 16:53:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.33 from job set of time 1523973180000 ms 18/04/17 16:53:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 538.0 (TID 538) in 7549 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:53:07 INFO scheduler.DAGScheduler: ResultStage 538 (foreachPartition at PredictorEngineApp.java:153) finished in 7.550 s 18/04/17 16:53:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 538.0, whose tasks have all completed, from pool 18/04/17 16:53:07 INFO scheduler.DAGScheduler: Job 538 finished: foreachPartition at PredictorEngineApp.java:153, took 7.622711 s 18/04/17 16:53:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63709ecd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x63709ecd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57363, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9407, negotiated timeout = 60000 18/04/17 16:53:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9407 18/04/17 16:53:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9407 closed 18/04/17 16:53:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.18 from job set of time 1523973180000 ms 18/04/17 16:53:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 529.0 (TID 529) in 8095 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:53:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 529.0, whose tasks have all completed, from pool 18/04/17 16:53:08 INFO scheduler.DAGScheduler: ResultStage 529 (foreachPartition at PredictorEngineApp.java:153) finished in 8.095 s 18/04/17 16:53:08 INFO scheduler.DAGScheduler: Job 529 finished: foreachPartition at PredictorEngineApp.java:153, took 8.121182 s 18/04/17 16:53:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xa7928a3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xa7928a30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35516, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c943e, negotiated timeout = 60000 18/04/17 16:53:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c943e 18/04/17 16:53:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c943e closed 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.6 from job set of time 1523973180000 ms 18/04/17 16:53:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 545.0 (TID 545) in 8178 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:53:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 545.0, whose tasks have all completed, from pool 18/04/17 16:53:08 INFO scheduler.DAGScheduler: ResultStage 545 (foreachPartition at PredictorEngineApp.java:153) finished in 8.180 s 18/04/17 16:53:08 INFO scheduler.DAGScheduler: Job 545 finished: foreachPartition at PredictorEngineApp.java:153, took 8.277681 s 18/04/17 16:53:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c9d6c7b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c9d6c7b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35519, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9440, negotiated timeout = 60000 18/04/17 16:53:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9440 18/04/17 16:53:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9440 closed 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.32 from job set of time 1523973180000 ms 18/04/17 16:53:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 527.0 (TID 527) in 8613 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:53:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 527.0, whose tasks have all completed, from pool 18/04/17 16:53:08 INFO scheduler.DAGScheduler: ResultStage 527 (foreachPartition at PredictorEngineApp.java:153) finished in 8.614 s 18/04/17 16:53:08 INFO scheduler.DAGScheduler: Job 527 finished: foreachPartition at PredictorEngineApp.java:153, took 8.631433 s 18/04/17 16:53:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3d81ac25 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3d81ac250x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35522, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9442, negotiated timeout = 60000 18/04/17 16:53:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9442 18/04/17 16:53:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 541.0 (TID 541) in 8573 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:53:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 541.0, whose tasks have all completed, from pool 18/04/17 16:53:08 INFO scheduler.DAGScheduler: ResultStage 541 (foreachPartition at PredictorEngineApp.java:153) finished in 8.574 s 18/04/17 16:53:08 INFO scheduler.DAGScheduler: Job 541 finished: foreachPartition at PredictorEngineApp.java:153, took 8.656001 s 18/04/17 16:53:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x55399a2a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x55399a2a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9442 closed 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40120, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d46, negotiated timeout = 60000 18/04/17 16:53:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.23 from job set of time 1523973180000 ms 18/04/17 16:53:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d46 18/04/17 16:53:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d46 closed 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.34 from job set of time 1523973180000 ms 18/04/17 16:53:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 543.0 (TID 543) in 8679 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:53:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 543.0, whose tasks have all completed, from pool 18/04/17 16:53:08 INFO scheduler.DAGScheduler: ResultStage 543 (foreachPartition at PredictorEngineApp.java:153) finished in 8.680 s 18/04/17 16:53:08 INFO scheduler.DAGScheduler: Job 543 finished: foreachPartition at PredictorEngineApp.java:153, took 8.768071 s 18/04/17 16:53:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6ac355ed connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6ac355ed0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40123, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d47, negotiated timeout = 60000 18/04/17 16:53:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d47 18/04/17 16:53:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d47 closed 18/04/17 16:53:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.26 from job set of time 1523973180000 ms 18/04/17 16:53:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 547.0 (TID 547) in 9492 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:53:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 547.0, whose tasks have all completed, from pool 18/04/17 16:53:09 INFO scheduler.DAGScheduler: ResultStage 547 (foreachPartition at PredictorEngineApp.java:153) finished in 9.494 s 18/04/17 16:53:09 INFO scheduler.DAGScheduler: Job 547 finished: foreachPartition at PredictorEngineApp.java:153, took 9.600936 s 18/04/17 16:53:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x56fdc704 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x56fdc7040x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40130, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d48, negotiated timeout = 60000 18/04/17 16:53:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d48 18/04/17 16:53:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d48 closed 18/04/17 16:53:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.9 from job set of time 1523973180000 ms 18/04/17 16:53:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 531.0 (TID 531) in 9982 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:53:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 531.0, whose tasks have all completed, from pool 18/04/17 16:53:10 INFO scheduler.DAGScheduler: ResultStage 531 (foreachPartition at PredictorEngineApp.java:153) finished in 9.983 s 18/04/17 16:53:10 INFO scheduler.DAGScheduler: Job 531 finished: foreachPartition at PredictorEngineApp.java:153, took 10.017045 s 18/04/17 16:53:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x383cc58b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x383cc58b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35538, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9445, negotiated timeout = 60000 18/04/17 16:53:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9445 18/04/17 16:53:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9445 closed 18/04/17 16:53:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.24 from job set of time 1523973180000 ms 18/04/17 16:53:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 544.0 (TID 544) in 11694 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:53:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 544.0, whose tasks have all completed, from pool 18/04/17 16:53:11 INFO scheduler.DAGScheduler: ResultStage 544 (foreachPartition at PredictorEngineApp.java:153) finished in 11.695 s 18/04/17 16:53:11 INFO scheduler.DAGScheduler: Job 544 finished: foreachPartition at PredictorEngineApp.java:153, took 11.788236 s 18/04/17 16:53:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x48474e27 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x48474e270x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57394, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9408, negotiated timeout = 60000 18/04/17 16:53:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9408 18/04/17 16:53:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9408 closed 18/04/17 16:53:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.11 from job set of time 1523973180000 ms 18/04/17 16:53:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 546.0 (TID 546) in 15370 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:53:15 INFO scheduler.DAGScheduler: ResultStage 546 (foreachPartition at PredictorEngineApp.java:153) finished in 15.371 s 18/04/17 16:53:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 546.0, whose tasks have all completed, from pool 18/04/17 16:53:15 INFO scheduler.DAGScheduler: Job 546 finished: foreachPartition at PredictorEngineApp.java:153, took 15.586278 s 18/04/17 16:53:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x92df44b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x92df44b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35551, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9448, negotiated timeout = 60000 18/04/17 16:53:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9448 18/04/17 16:53:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9448 closed 18/04/17 16:53:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.10 from job set of time 1523973180000 ms 18/04/17 16:53:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 542.0 (TID 542) in 15884 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:53:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 542.0, whose tasks have all completed, from pool 18/04/17 16:53:16 INFO scheduler.DAGScheduler: ResultStage 542 (foreachPartition at PredictorEngineApp.java:153) finished in 15.885 s 18/04/17 16:53:16 INFO scheduler.DAGScheduler: Job 542 finished: foreachPartition at PredictorEngineApp.java:153, took 15.970290 s 18/04/17 16:53:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x77ca493e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x77ca493e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40149, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 549.0 (TID 549) in 15855 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:53:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 549.0, whose tasks have all completed, from pool 18/04/17 16:53:16 INFO scheduler.DAGScheduler: ResultStage 549 (foreachPartition at PredictorEngineApp.java:153) finished in 15.856 s 18/04/17 16:53:16 INFO scheduler.DAGScheduler: Job 549 finished: foreachPartition at PredictorEngineApp.java:153, took 15.976375 s 18/04/17 16:53:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1bc3463f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1bc3463f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40150, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d4b, negotiated timeout = 60000 18/04/17 16:53:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d4c, negotiated timeout = 60000 18/04/17 16:53:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d4c 18/04/17 16:53:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d4b 18/04/17 16:53:16 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d4c closed 18/04/17 16:53:16 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d4b closed 18/04/17 16:53:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:16 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.29 from job set of time 1523973180000 ms 18/04/17 16:53:16 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.28 from job set of time 1523973180000 ms 18/04/17 16:53:23 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 534.0 (TID 534) in 23866 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:53:23 INFO cluster.YarnClusterScheduler: Removed TaskSet 534.0, whose tasks have all completed, from pool 18/04/17 16:53:23 INFO scheduler.DAGScheduler: ResultStage 534 (foreachPartition at PredictorEngineApp.java:153) finished in 23.867 s 18/04/17 16:53:23 INFO scheduler.DAGScheduler: Job 534 finished: foreachPartition at PredictorEngineApp.java:153, took 23.921288 s 18/04/17 16:53:23 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xd53e976 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:23 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xd53e9760x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:23 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:23 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57426, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:23 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a940b, negotiated timeout = 60000 18/04/17 16:53:24 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a940b 18/04/17 16:53:24 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a940b closed 18/04/17 16:53:24 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:24 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.5 from job set of time 1523973180000 ms 18/04/17 16:53:24 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 550.0 (TID 550) in 23961 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:53:24 INFO cluster.YarnClusterScheduler: Removed TaskSet 550.0, whose tasks have all completed, from pool 18/04/17 16:53:24 INFO scheduler.DAGScheduler: ResultStage 550 (foreachPartition at PredictorEngineApp.java:153) finished in 23.962 s 18/04/17 16:53:24 INFO scheduler.DAGScheduler: Job 551 finished: foreachPartition at PredictorEngineApp.java:153, took 24.085076 s 18/04/17 16:53:24 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3c4c1bd0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3c4c1bd00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:24 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:24 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40174, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:24 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d4d, negotiated timeout = 60000 18/04/17 16:53:24 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d4d 18/04/17 16:53:24 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d4d closed 18/04/17 16:53:24 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:24 INFO scheduler.JobScheduler: Finished job streaming job 1523973180000 ms.1 from job set of time 1523973180000 ms 18/04/17 16:53:24 INFO scheduler.JobScheduler: Total delay: 24.189 s for time 1523973180000 ms (execution: 24.129 s) 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 648 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 648 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 684 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 684 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 612 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 612 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 648 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 648 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 684 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 684 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 612 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 612 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 649 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 649 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 685 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 685 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 613 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 613 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 649 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 649 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 685 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 685 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 613 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 613 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 650 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 650 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 686 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 686 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 614 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 614 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 650 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 650 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 686 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 686 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 614 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 614 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 651 from persistence list 18/04/17 
16:53:24 INFO storage.BlockManager: Removing RDD 651 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 687 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 687 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 615 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 615 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 651 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 651 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 687 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 687 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 615 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 615 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 652 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 652 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 688 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 688 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 616 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 616 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 652 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 652 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 688 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 688 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 616 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 616 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 653 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 653 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 689 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 689 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 617 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 617 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 653 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 653 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 689 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 689 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 617 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 617 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 654 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 654 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 690 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 690 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 618 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 618 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 654 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 654 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 690 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 690 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 618 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 618 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 655 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 655 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 691 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 691 
18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 619 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 619 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 655 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 655 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 691 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 691 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 619 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 619 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 656 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 656 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 692 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 692 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 620 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 620 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 656 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 656 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 692 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 692 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 620 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 620 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 657 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 657 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 693 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 693 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 621 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 621 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 657 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 657 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 693 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 693 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 621 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 621 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 658 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 658 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 694 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 694 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 622 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 622 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 658 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 658 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 694 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 694 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 622 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 622 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 659 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 659 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 695 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 695 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 623 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 623 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 
659 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 659 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 695 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 695 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 623 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 623 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 660 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 660 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 696 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 696 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 624 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 624 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 660 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 660 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 696 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 696 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 624 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 624 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 661 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 661 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 697 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 697 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 625 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 625 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 661 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 661 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 697 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 697 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 625 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 625 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 662 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 662 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 698 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 698 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 626 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 626 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 662 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 662 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 698 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 698 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 626 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 626 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 663 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 663 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 699 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 699 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 627 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 627 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 663 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 663 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 699 from persistence list 18/04/17 16:53:24 INFO 
storage.BlockManager: Removing RDD 699 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 627 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 627 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 664 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 664 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 700 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 700 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 628 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 628 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 664 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 664 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 700 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 700 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 628 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 628 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 665 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 665 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 701 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 701 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 629 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 629 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 665 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 665 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 701 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 701 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 629 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 629 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 666 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 666 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 702 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 702 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 630 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 630 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 666 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 666 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 702 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 702 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 630 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 630 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 667 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 667 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 703 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 703 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 631 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 631 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 667 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 667 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 703 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 703 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 631 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 631 18/04/17 
16:53:24 INFO kafka.KafkaRDD: Removing RDD 668 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 668 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 704 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 704 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 632 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 632 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 668 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 668 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 704 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 704 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 632 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 632 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 669 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 669 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 705 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 705 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 633 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 633 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 669 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 669 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 705 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 705 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 633 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 633 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 670 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 670 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 706 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 706 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 634 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 634 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 670 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 670 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 706 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 706 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 634 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 634 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 671 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 671 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 707 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 707 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 635 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 635 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 671 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 671 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 707 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 707 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 635 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 635 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 672 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 672 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 708 from 
persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 708 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 636 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 636 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 672 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 672 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 708 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 708 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 636 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 636 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 673 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 673 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 709 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 709 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 637 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 637 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 673 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 673 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 709 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 709 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 637 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 637 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 674 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 674 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 710 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 710 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 638 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 638 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 674 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 674 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 710 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 710 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 638 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 638 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 675 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 675 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 711 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 711 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 639 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 639 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 675 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 675 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 711 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 711 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 639 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 639 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 676 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 676 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 712 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 712 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 640 from persistence list 18/04/17 16:53:24 INFO 
storage.BlockManager: Removing RDD 640 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 676 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 676 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 712 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 712 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 640 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 640 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 677 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 677 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 713 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 713 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 641 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 641 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 677 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 677 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 713 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 713 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 641 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 641 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 678 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 678 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 714 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 714 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 642 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 642 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 678 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 678 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 714 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 714 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 642 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 642 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 679 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 679 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 715 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 715 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 643 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 643 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 679 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 679 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 715 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 715 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 643 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 643 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 680 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 680 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 716 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 716 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 644 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 644 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 680 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 680 18/04/17 
16:53:24 INFO kafka.KafkaRDD: Removing RDD 716 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 716 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 644 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 644 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 681 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 681 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 717 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 717 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 645 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 645 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 681 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 681 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 717 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 717 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 645 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 645 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 682 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 682 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 718 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 718 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 646 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 646 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 682 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 682 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 718 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 718 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 646 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 646 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 683 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 683 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 719 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 719 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 647 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 647 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 683 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 683 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 719 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 719 18/04/17 16:53:24 INFO kafka.KafkaRDD: Removing RDD 647 from persistence list 18/04/17 16:53:24 INFO storage.BlockManager: Removing RDD 647 18/04/17 16:53:24 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:53:24 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973060000 ms 1523973000000 ms 1523972940000 ms 18/04/17 16:53:30 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 520.0 (TID 520) in 90477 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:53:30 INFO cluster.YarnClusterScheduler: Removed TaskSet 520.0, whose tasks have all completed, from pool 18/04/17 16:53:30 INFO scheduler.DAGScheduler: ResultStage 520 (foreachPartition at PredictorEngineApp.java:153) finished in 90.479 s 18/04/17 16:53:30 INFO scheduler.DAGScheduler: Job 520 finished: foreachPartition at PredictorEngineApp.java:153, took 90.585022 s 18/04/17 16:53:30 
INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4d280318 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:53:30 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4d2803180x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_540_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:53:30 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40188, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_540_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_553_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_553_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d51, negotiated timeout = 60000 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 554 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_552_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_552_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 529 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_527_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_527_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d51 18/04/17 16:53:30 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d51 closed 18/04/17 16:53:30 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:53:30 INFO scheduler.JobScheduler: Finished job streaming job 1523973120000 ms.15 from job set of time 1523973120000 ms 18/04/17 16:53:30 INFO scheduler.JobScheduler: Total delay: 90.686 s for time 1523973120000 ms (execution: 90.634 s) 18/04/17 16:53:30 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:53:30 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 528 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_529_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_529_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 530 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed 
broadcast_528_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_528_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 532 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_530_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_530_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 531 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 533 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_531_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_531_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_533_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_533_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 534 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_532_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_532_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 536 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_534_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_534_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 535 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_536_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_536_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 537 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_535_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_535_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 539 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_537_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_537_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 538 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_539_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_539_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 540 
18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_538_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_538_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 542 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 541 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_542_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_542_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 543 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_541_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_541_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 545 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_543_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_543_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 544 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_545_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_545_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 546 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_544_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_544_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 548 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_546_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_546_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 547 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_548_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_548_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 549 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_547_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_547_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 551 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_549_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_549_piece0 on ***hostname 
masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 550 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_551_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_551_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 552 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_550_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:53:30 INFO storage.BlockManagerInfo: Removed broadcast_550_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:53:30 INFO spark.ContextCleaner: Cleaned accumulator 553 18/04/17 16:54:00 INFO scheduler.JobScheduler: Added jobs for time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.0 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.1 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.2 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.3 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.0 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.4 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.3 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.4 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.7 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.8 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.5 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.6 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.9 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.10 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.11 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.12 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.13 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.14 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.13 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.17 from job set of time 1523973240000 
ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.15 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.16 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.14 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.17 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.20 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.18 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.19 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.16 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.21 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.21 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.23 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.22 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.24 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.25 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.26 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.28 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.27 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.29 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.30 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.31 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.32 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.33 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.30 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.34 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973240000 ms.35 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO 
scheduler.DAGScheduler: Got job 554 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 554 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 554 (KafkaRDD[784] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_554 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_554_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_554_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 554 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 554 (KafkaRDD[784] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 554.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 555 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 555 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 555 (KafkaRDD[775] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 554.0 (TID 554, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_555 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_555_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_555_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 555 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 555 (KafkaRDD[775] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 555.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 556 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 556 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 556 (KafkaRDD[789] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 555.0 (TID 555, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_556 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_556_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_556_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 556 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 556 (KafkaRDD[789] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 556.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 557 (foreachPartition at PredictorEngineApp.java:153) with 1 
output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 557 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 557 (KafkaRDD[768] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 556.0 (TID 556, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_557 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_557_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_557_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 557 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_554_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 557 (KafkaRDD[768] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 557.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 558 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 558 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 558 (KafkaRDD[787] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 557.0 (TID 557, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_558 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_558_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_558_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 558 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 558 (KafkaRDD[787] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 558.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 559 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 559 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 559 (KafkaRDD[774] 
at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 558.0 (TID 558, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_559 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_556_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_559_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_559_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 559 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 559 (KafkaRDD[774] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 559.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 560 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 560 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 560 (KafkaRDD[766] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_560 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 559.0 (TID 559, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_560_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_560_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 560 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 560 (KafkaRDD[766] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 560.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 561 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 561 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 561 (KafkaRDD[788] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 560.0 (TID 560, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_561 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 
18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_557_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_561_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_561_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 561 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 561 (KafkaRDD[788] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 561.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 562 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 562 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 562 (KafkaRDD[782] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 561.0 (TID 561, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_562 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_555_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_562_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_562_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 562 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 562 (KafkaRDD[782] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 562.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 563 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 563 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 563 (KafkaRDD[791] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_558_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_563 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 562.0 (TID 562, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_559_piece0 in memory on 
***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_563_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_563_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 563 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 563 (KafkaRDD[791] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 563.0 with 1 tasks 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_560_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 564 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 564 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 564 (KafkaRDD[765] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 563.0 (TID 563, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_564 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_564_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_564_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 564 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 564 (KafkaRDD[765] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 564.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 565 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 565 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 565 (KafkaRDD[776] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 564.0 (TID 564, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_565 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_563_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_565_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO 
storage.BlockManagerInfo: Added broadcast_565_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 565 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 565 (KafkaRDD[776] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 565.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 566 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 566 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 566 (KafkaRDD[790] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 565.0 (TID 565, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_566 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_566_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_566_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 566 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 566 (KafkaRDD[790] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 566.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 567 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 567 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 567 (KafkaRDD[778] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_567 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 566.0 (TID 566, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_565_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_562_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_564_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_561_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_567_piece0 stored as bytes in memory 
(estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_567_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 567 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 567 (KafkaRDD[778] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 567.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 568 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 568 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 568 (KafkaRDD[758] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_568 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 567.0 (TID 567, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_566_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_568_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_568_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 568 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 568 (KafkaRDD[758] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 568.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 569 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 569 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 569 (KafkaRDD[780] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_569 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 568.0 (TID 568, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_569_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_569_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 569 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 569 
(KafkaRDD[780] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 569.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 571 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 570 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 570 (KafkaRDD[763] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_570 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 569.0 (TID 569, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_568_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_570_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_570_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 570 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 570 (KafkaRDD[763] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 570.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 570 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 571 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 571 (KafkaRDD[757] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_571 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 570.0 (TID 570, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_571_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_571_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 571 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 571 (KafkaRDD[757] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 571.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 573 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 572 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 572 (KafkaRDD[767] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_572 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 571.0 (TID 571, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_572_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_572_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 572 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 572 (KafkaRDD[767] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 572.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 572 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 573 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 573 (KafkaRDD[762] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_573 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 572.0 (TID 572, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_569_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_571_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_573_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_573_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 573 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 573 (KafkaRDD[762] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 573.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 575 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 574 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO 
scheduler.DAGScheduler: Submitting ResultStage 574 (KafkaRDD[781] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_574 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 573.0 (TID 573, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_570_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_574_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_574_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 574 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 574 (KafkaRDD[781] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 574.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 576 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 575 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 575 (KafkaRDD[783] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_575 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 574.0 (TID 574, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:54:00 INFO spark.ContextCleaner: Cleaned accumulator 521 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_573_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Removed broadcast_520_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_575_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_567_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_575_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 575 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 575 (KafkaRDD[783] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 575.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 574 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 576 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: 
Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 576 (KafkaRDD[785] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_576 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_572_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 575.0 (TID 575, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_576_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_576_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 576 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 576 (KafkaRDD[785] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 576.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 577 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 577 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 577 (KafkaRDD[771] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_577 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 576.0 (TID 576, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Removed broadcast_520_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_577_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_577_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 577 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 577 (KafkaRDD[771] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 577.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 578 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 578 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 578 (KafkaRDD[779] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_578 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 577.0 (TID 577, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_574_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_578_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_578_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 578 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 578 (KafkaRDD[779] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 578.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 579 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 579 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 579 (KafkaRDD[764] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_579 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 578.0 (TID 578, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_576_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_579_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_579_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 579 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 579 (KafkaRDD[764] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 579.0 with 1 tasks 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_575_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Got job 580 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 580 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting ResultStage 580 (KafkaRDD[761] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:54:00 INFO 
storage.MemoryStore: Block broadcast_580 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_577_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 579.0 (TID 579, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:54:00 INFO storage.MemoryStore: Block broadcast_580_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_580_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:54:00 INFO spark.SparkContext: Created broadcast 580 from broadcast at DAGScheduler.scala:1006 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 580 (KafkaRDD[761] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Adding task set 580.0 with 1 tasks 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 580.0 (TID 580, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_579_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_578_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO storage.BlockManagerInfo: Added broadcast_580_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 554.0 (TID 554) in 170 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 554.0, whose tasks have all completed, from pool 18/04/17 16:54:00 INFO scheduler.DAGScheduler: ResultStage 554 (foreachPartition at PredictorEngineApp.java:153) finished in 0.170 s 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Job 554 finished: foreachPartition at PredictorEngineApp.java:153, took 0.180689 s 18/04/17 16:54:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2c264e35 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2c264e350x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40313, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d5e, negotiated timeout = 60000 18/04/17 16:54:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d5e 18/04/17 16:54:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d5e closed 18/04/17 16:54:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.28 from job set of time 1523973240000 ms 18/04/17 16:54:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 563.0 (TID 563) in 629 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:54:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 563.0, whose tasks have all completed, from pool 18/04/17 16:54:00 INFO scheduler.DAGScheduler: ResultStage 563 (foreachPartition at PredictorEngineApp.java:153) finished in 0.629 s 18/04/17 16:54:00 INFO scheduler.DAGScheduler: Job 563 finished: foreachPartition at PredictorEngineApp.java:153, took 0.678402 s 18/04/17 16:54:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x642023ee connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x642023ee0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57572, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9425, negotiated timeout = 60000 18/04/17 16:54:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9425 18/04/17 16:54:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9425 closed 18/04/17 16:54:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.35 from job set of time 1523973240000 ms 18/04/17 16:54:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 579.0 (TID 579) in 1989 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:54:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 579.0, whose tasks have all completed, from pool 18/04/17 16:54:02 INFO scheduler.DAGScheduler: ResultStage 579 (foreachPartition at PredictorEngineApp.java:153) finished in 1.990 s 18/04/17 16:54:02 INFO scheduler.DAGScheduler: Job 579 finished: foreachPartition at PredictorEngineApp.java:153, took 2.113815 s 18/04/17 16:54:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x70eca314 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x70eca3140x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35726, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c945c, negotiated timeout = 60000 18/04/17 16:54:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c945c 18/04/17 16:54:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c945c closed 18/04/17 16:54:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.8 from job set of time 1523973240000 ms 18/04/17 16:54:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 570.0 (TID 570) in 2180 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:54:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 570.0, whose tasks have all completed, from pool 18/04/17 16:54:02 INFO scheduler.DAGScheduler: ResultStage 570 (foreachPartition at PredictorEngineApp.java:153) finished in 2.182 s 18/04/17 16:54:02 INFO scheduler.DAGScheduler: Job 571 finished: foreachPartition at PredictorEngineApp.java:153, took 2.263362 s 18/04/17 16:54:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2f69d8e9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2f69d8e90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40324, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d63, negotiated timeout = 60000 18/04/17 16:54:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d63 18/04/17 16:54:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d63 closed 18/04/17 16:54:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.7 from job set of time 1523973240000 ms 18/04/17 16:54:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 575.0 (TID 575) in 4822 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:54:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 575.0, whose tasks have all completed, from pool 18/04/17 16:54:05 INFO scheduler.DAGScheduler: ResultStage 575 (foreachPartition at PredictorEngineApp.java:153) finished in 4.823 s 18/04/17 16:54:05 INFO scheduler.DAGScheduler: Job 576 finished: foreachPartition at PredictorEngineApp.java:153, took 4.935607 s 18/04/17 16:54:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ec94313 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4ec943130x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40330, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d64, negotiated timeout = 60000 18/04/17 16:54:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d64 18/04/17 16:54:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d64 closed 18/04/17 16:54:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.27 from job set of time 1523973240000 ms 18/04/17 16:54:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 556.0 (TID 556) in 5291 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:54:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 556.0, whose tasks have all completed, from pool 18/04/17 16:54:05 INFO scheduler.DAGScheduler: ResultStage 556 (foreachPartition at PredictorEngineApp.java:153) finished in 5.291 s 18/04/17 16:54:05 INFO scheduler.DAGScheduler: Job 556 finished: foreachPartition at PredictorEngineApp.java:153, took 5.310293 s 18/04/17 16:54:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x34b5bc51 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x34b5bc510x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35740, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c945f, negotiated timeout = 60000 18/04/17 16:54:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c945f 18/04/17 16:54:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c945f closed 18/04/17 16:54:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.33 from job set of time 1523973240000 ms 18/04/17 16:54:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 557.0 (TID 557) in 6349 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:54:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 557.0, whose tasks have all completed, from pool 18/04/17 16:54:06 INFO scheduler.DAGScheduler: ResultStage 557 (foreachPartition at PredictorEngineApp.java:153) finished in 6.349 s 18/04/17 16:54:06 INFO scheduler.DAGScheduler: Job 557 finished: foreachPartition at PredictorEngineApp.java:153, took 6.371551 s 18/04/17 16:54:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1fdcaa0f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1fdcaa0f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57596, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9429, negotiated timeout = 60000 18/04/17 16:54:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9429 18/04/17 16:54:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9429 closed 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.12 from job set of time 1523973240000 ms 18/04/17 16:54:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 559.0 (TID 559) in 6582 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:54:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 559.0, whose tasks have all completed, from pool 18/04/17 16:54:06 INFO scheduler.DAGScheduler: ResultStage 559 (foreachPartition at PredictorEngineApp.java:153) finished in 6.582 s 18/04/17 16:54:06 INFO scheduler.DAGScheduler: Job 559 finished: foreachPartition at PredictorEngineApp.java:153, took 6.615073 s 18/04/17 16:54:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5a548d64 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5a548d640x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35748, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9461, negotiated timeout = 60000 18/04/17 16:54:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9461 18/04/17 16:54:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9461 closed 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.18 from job set of time 1523973240000 ms 18/04/17 16:54:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 555.0 (TID 555) in 6757 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:54:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 555.0, whose tasks have all completed, from pool 18/04/17 16:54:06 INFO scheduler.DAGScheduler: ResultStage 555 (foreachPartition at PredictorEngineApp.java:153) finished in 6.757 s 18/04/17 16:54:06 INFO scheduler.DAGScheduler: Job 555 finished: foreachPartition at PredictorEngineApp.java:153, took 6.771892 s 18/04/17 16:54:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4b08457d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4b08457d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40346, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d65, negotiated timeout = 60000 18/04/17 16:54:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d65 18/04/17 16:54:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d65 closed 18/04/17 16:54:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.19 from job set of time 1523973240000 ms 18/04/17 16:54:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 561.0 (TID 561) in 6947 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:54:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 561.0, whose tasks have all completed, from pool 18/04/17 16:54:07 INFO scheduler.DAGScheduler: ResultStage 561 (foreachPartition at PredictorEngineApp.java:153) finished in 6.948 s 18/04/17 16:54:07 INFO scheduler.DAGScheduler: Job 561 finished: foreachPartition at PredictorEngineApp.java:153, took 6.988906 s 18/04/17 16:54:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3d381c0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3d381c00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35755, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9462, negotiated timeout = 60000 18/04/17 16:54:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9462 18/04/17 16:54:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9462 closed 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.32 from job set of time 1523973240000 ms 18/04/17 16:54:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 564.0 (TID 564) in 7077 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:54:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 564.0, whose tasks have all completed, from pool 18/04/17 16:54:07 INFO scheduler.DAGScheduler: ResultStage 564 (foreachPartition at PredictorEngineApp.java:153) finished in 7.078 s 18/04/17 16:54:07 INFO scheduler.DAGScheduler: Job 564 finished: foreachPartition at PredictorEngineApp.java:153, took 7.133214 s 18/04/17 16:54:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x23618e6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x23618e60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35758, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9463, negotiated timeout = 60000 18/04/17 16:54:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9463 18/04/17 16:54:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9463 closed 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.9 from job set of time 1523973240000 ms 18/04/17 16:54:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 576.0 (TID 576) in 7143 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:54:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 576.0, whose tasks have all completed, from pool 18/04/17 16:54:07 INFO scheduler.DAGScheduler: ResultStage 576 (foreachPartition at PredictorEngineApp.java:153) finished in 7.144 s 18/04/17 16:54:07 INFO scheduler.DAGScheduler: Job 574 finished: foreachPartition at PredictorEngineApp.java:153, took 7.259288 s 18/04/17 16:54:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x13c69b17 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x13c69b170x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35761, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9465, negotiated timeout = 60000 18/04/17 16:54:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9465 18/04/17 16:54:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9465 closed 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.29 from job set of time 1523973240000 ms 18/04/17 16:54:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 573.0 (TID 573) in 7404 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:54:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 573.0, whose tasks have all completed, from pool 18/04/17 16:54:07 INFO scheduler.DAGScheduler: ResultStage 573 (foreachPartition at PredictorEngineApp.java:153) finished in 7.405 s 18/04/17 16:54:07 INFO scheduler.DAGScheduler: Job 572 finished: foreachPartition at PredictorEngineApp.java:153, took 7.498826 s 18/04/17 16:54:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x78adb61 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x78adb610x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35764, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9466, negotiated timeout = 60000 18/04/17 16:54:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9466 18/04/17 16:54:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9466 closed 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.6 from job set of time 1523973240000 ms 18/04/17 16:54:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 569.0 (TID 569) in 7729 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:54:07 INFO scheduler.DAGScheduler: ResultStage 569 (foreachPartition at PredictorEngineApp.java:153) finished in 7.730 s 18/04/17 16:54:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 569.0, whose tasks have all completed, from pool 18/04/17 16:54:07 INFO scheduler.DAGScheduler: Job 569 finished: foreachPartition at PredictorEngineApp.java:153, took 7.807535 s 18/04/17 16:54:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b6b1bb6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1b6b1bb60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57618, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a942a, negotiated timeout = 60000 18/04/17 16:54:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a942a 18/04/17 16:54:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a942a closed 18/04/17 16:54:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.24 from job set of time 1523973240000 ms 18/04/17 16:54:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 558.0 (TID 558) in 8960 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:54:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 558.0, whose tasks have all completed, from pool 18/04/17 16:54:09 INFO scheduler.DAGScheduler: ResultStage 558 (foreachPartition at PredictorEngineApp.java:153) finished in 8.960 s 18/04/17 16:54:09 INFO scheduler.DAGScheduler: Job 558 finished: foreachPartition at PredictorEngineApp.java:153, took 8.988113 s 18/04/17 16:54:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x74fce7ab connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x74fce7ab0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57623, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a942e, negotiated timeout = 60000 18/04/17 16:54:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a942e 18/04/17 16:54:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a942e closed 18/04/17 16:54:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.31 from job set of time 1523973240000 ms 18/04/17 16:54:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 562.0 (TID 562) in 9182 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:54:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 562.0, whose tasks have all completed, from pool 18/04/17 16:54:09 INFO scheduler.DAGScheduler: ResultStage 562 (foreachPartition at PredictorEngineApp.java:153) finished in 9.183 s 18/04/17 16:54:09 INFO scheduler.DAGScheduler: Job 562 finished: foreachPartition at PredictorEngineApp.java:153, took 9.227894 s 18/04/17 16:54:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x26e1761a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x26e1761a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57626, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a942f, negotiated timeout = 60000 18/04/17 16:54:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a942f 18/04/17 16:54:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a942f closed 18/04/17 16:54:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.26 from job set of time 1523973240000 ms 18/04/17 16:54:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 580.0 (TID 580) in 10018 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:54:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 580.0, whose tasks have all completed, from pool 18/04/17 16:54:10 INFO scheduler.DAGScheduler: ResultStage 580 (foreachPartition at PredictorEngineApp.java:153) finished in 10.018 s 18/04/17 16:54:10 INFO scheduler.DAGScheduler: Job 580 finished: foreachPartition at PredictorEngineApp.java:153, took 10.143213 s 18/04/17 16:54:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5cb1aa1c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5cb1aa1c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35780, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9467, negotiated timeout = 60000 18/04/17 16:54:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9467 18/04/17 16:54:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9467 closed 18/04/17 16:54:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.5 from job set of time 1523973240000 ms 18/04/17 16:54:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 571.0 (TID 571) in 10924 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:54:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 571.0, whose tasks have all completed, from pool 18/04/17 16:54:11 INFO scheduler.DAGScheduler: ResultStage 571 (foreachPartition at PredictorEngineApp.java:153) finished in 10.926 s 18/04/17 16:54:11 INFO scheduler.DAGScheduler: Job 570 finished: foreachPartition at PredictorEngineApp.java:153, took 11.011423 s 18/04/17 16:54:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7ee2bcd9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7ee2bcd90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35784, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9469, negotiated timeout = 60000 18/04/17 16:54:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9469 18/04/17 16:54:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9469 closed 18/04/17 16:54:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.1 from job set of time 1523973240000 ms 18/04/17 16:54:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 567.0 (TID 567) in 12089 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:54:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 567.0, whose tasks have all completed, from pool 18/04/17 16:54:12 INFO scheduler.DAGScheduler: ResultStage 567 (foreachPartition at PredictorEngineApp.java:153) finished in 12.089 s 18/04/17 16:54:12 INFO scheduler.DAGScheduler: Job 567 finished: foreachPartition at PredictorEngineApp.java:153, took 12.159047 s 18/04/17 16:54:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72509733 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x725097330x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57639, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9430, negotiated timeout = 60000 18/04/17 16:54:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9430 18/04/17 16:54:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9430 closed 18/04/17 16:54:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.22 from job set of time 1523973240000 ms 18/04/17 16:54:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 577.0 (TID 577) in 12484 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:54:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 577.0, whose tasks have all completed, from pool 18/04/17 16:54:12 INFO scheduler.DAGScheduler: ResultStage 577 (foreachPartition at PredictorEngineApp.java:153) finished in 12.485 s 18/04/17 16:54:12 INFO scheduler.DAGScheduler: Job 577 finished: foreachPartition at PredictorEngineApp.java:153, took 12.603307 s 18/04/17 16:54:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x114f5e6e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x114f5e6e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40386, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d6a, negotiated timeout = 60000 18/04/17 16:54:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d6a 18/04/17 16:54:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d6a closed 18/04/17 16:54:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.15 from job set of time 1523973240000 ms 18/04/17 16:54:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 578.0 (TID 578) in 13600 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:54:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 578.0, whose tasks have all completed, from pool 18/04/17 16:54:13 INFO scheduler.DAGScheduler: ResultStage 578 (foreachPartition at PredictorEngineApp.java:153) finished in 13.601 s 18/04/17 16:54:13 INFO scheduler.DAGScheduler: Job 578 finished: foreachPartition at PredictorEngineApp.java:153, took 13.722418 s 18/04/17 16:54:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x632ad79b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x632ad79b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57646, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9433, negotiated timeout = 60000 18/04/17 16:54:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9433 18/04/17 16:54:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9433 closed 18/04/17 16:54:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.23 from job set of time 1523973240000 ms 18/04/17 16:54:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 565.0 (TID 565) in 14015 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:54:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 565.0, whose tasks have all completed, from pool 18/04/17 16:54:14 INFO scheduler.DAGScheduler: ResultStage 565 (foreachPartition at PredictorEngineApp.java:153) finished in 14.015 s 18/04/17 16:54:14 INFO scheduler.DAGScheduler: Job 565 finished: foreachPartition at PredictorEngineApp.java:153, took 14.075943 s 18/04/17 16:54:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f0dd6d7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f0dd6d70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40394, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d6b, negotiated timeout = 60000 18/04/17 16:54:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d6b 18/04/17 16:54:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d6b closed 18/04/17 16:54:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:14 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.20 from job set of time 1523973240000 ms 18/04/17 16:54:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 568.0 (TID 568) in 14807 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:54:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 568.0, whose tasks have all completed, from pool 18/04/17 16:54:14 INFO scheduler.DAGScheduler: ResultStage 568 (foreachPartition at PredictorEngineApp.java:153) finished in 14.808 s 18/04/17 16:54:14 INFO scheduler.DAGScheduler: Job 568 finished: foreachPartition at PredictorEngineApp.java:153, took 14.881880 s 18/04/17 16:54:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x29042230 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x290422300x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57654, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9435, negotiated timeout = 60000 18/04/17 16:54:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9435 18/04/17 16:54:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9435 closed 18/04/17 16:54:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:14 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.2 from job set of time 1523973240000 ms 18/04/17 16:54:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 566.0 (TID 566) in 16536 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:54:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 566.0, whose tasks have all completed, from pool 18/04/17 16:54:16 INFO scheduler.DAGScheduler: ResultStage 566 (foreachPartition at PredictorEngineApp.java:153) finished in 16.537 s 18/04/17 16:54:16 INFO scheduler.DAGScheduler: Job 566 finished: foreachPartition at PredictorEngineApp.java:153, took 16.602302 s 18/04/17 16:54:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x363d528a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x363d528a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35809, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c946d, negotiated timeout = 60000 18/04/17 16:54:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c946d 18/04/17 16:54:16 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c946d closed 18/04/17 16:54:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:16 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.34 from job set of time 1523973240000 ms 18/04/17 16:54:26 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 572.0 (TID 572) in 26547 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:54:26 INFO scheduler.DAGScheduler: ResultStage 572 (foreachPartition at PredictorEngineApp.java:153) finished in 26.548 s 18/04/17 16:54:26 INFO cluster.YarnClusterScheduler: Removed TaskSet 572.0, whose tasks have all completed, from pool 18/04/17 16:54:26 INFO scheduler.DAGScheduler: Job 573 finished: foreachPartition at PredictorEngineApp.java:153, took 26.637748 s 18/04/17 16:54:26 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x27b5d47a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:26 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x27b5d47a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:26 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:26 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57678, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:26 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9438, negotiated timeout = 60000 18/04/17 16:54:26 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9438 18/04/17 16:54:26 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9438 closed 18/04/17 16:54:26 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:26 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.11 from job set of time 1523973240000 ms 18/04/17 16:54:28 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 560.0 (TID 560) in 28058 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:54:28 INFO scheduler.DAGScheduler: ResultStage 560 (foreachPartition at PredictorEngineApp.java:153) finished in 28.059 s 18/04/17 16:54:28 INFO cluster.YarnClusterScheduler: Removed TaskSet 560.0, whose tasks have all completed, from pool 18/04/17 16:54:28 INFO scheduler.DAGScheduler: Job 560 finished: foreachPartition at PredictorEngineApp.java:153, took 28.098154 s 18/04/17 16:54:28 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7cf11337 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:54:28 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7cf113370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:54:28 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:54:28 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57683, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:54:28 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9439, negotiated timeout = 60000 18/04/17 16:54:28 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9439 18/04/17 16:54:28 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9439 closed 18/04/17 16:54:28 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:54:28 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.10 from job set of time 1523973240000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Added jobs for time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.0 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.1 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.2 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.0 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.3 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.3 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.4 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.6 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.5 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.7 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.4 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.9 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.8 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.10 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.11 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.12 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.13 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.13 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.15 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.14 from job set of time 1523973300000 ms 18/04/17 16:55:00 
INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.16 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.14 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.16 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.18 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.19 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.20 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.17 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.17 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.21 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.22 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.21 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.24 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.23 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.26 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.25 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.27 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.28 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.29 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.30 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.31 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.30 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.33 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.32 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.34 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973300000 ms.35 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.35 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO 
spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 581 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 581 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 581 (KafkaRDD[817] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_581 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_581_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_581_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 581 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 581 (KafkaRDD[817] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 581.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 582 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 582 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 582 (KafkaRDD[804] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 581.0 (TID 581, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_582 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_582_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_582_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 582 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 582 (KafkaRDD[804] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 582.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 583 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 583 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 583 (KafkaRDD[801] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 582.0 (TID 582, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_583 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_583_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_583_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 583 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 583 (KafkaRDD[801] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 583.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 584 (foreachPartition at PredictorEngineApp.java:153) with 1 output 
partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 584 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 584 (KafkaRDD[820] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 583.0 (TID 583, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_584 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_584_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_584_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 584 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 584 (KafkaRDD[820] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 584.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 585 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 585 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 585 (KafkaRDD[818] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 584.0 (TID 584, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_585 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_582_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_585_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_585_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 585 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 585 (KafkaRDD[818] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 585.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 586 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 586 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 586 (KafkaRDD[793] at 
createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 585.0 (TID 585, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_586 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_583_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_586_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_586_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 586 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 586 (KafkaRDD[793] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 586.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 587 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 587 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 587 (KafkaRDD[800] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_587 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 586.0 (TID 586, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_587_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_587_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 587 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_581_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 587 (KafkaRDD[800] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 587.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 588 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 588 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 588 (KafkaRDD[798] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_588 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 587.0 (TID 587, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_588_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_588_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 588 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 588 (KafkaRDD[798] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 588.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 589 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 589 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 589 (KafkaRDD[814] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_584_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_589 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 588.0 (TID 588, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_589_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_589_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 589 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 589 (KafkaRDD[814] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 589.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 590 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 590 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 590 (KafkaRDD[810] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_590 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 589.0 (TID 589, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_590_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_590_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 590 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 590 (KafkaRDD[810] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 590.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 591 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 591 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 591 (KafkaRDD[799] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_591 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 590.0 (TID 590, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_589_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_585_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_588_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_591_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_565_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_591_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 591 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 591 (KafkaRDD[799] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 591.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 592 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 592 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 592 (KafkaRDD[812] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_565_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_592 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 591.0 (TID 591, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:55:00 INFO 
storage.MemoryStore: Block broadcast_592_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_592_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_555_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 592 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 592 (KafkaRDD[812] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 592.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 593 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 593 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 593 (KafkaRDD[826] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_593 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_555_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 592.0 (TID 592, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_587_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 556 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_590_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_554_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_554_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 555 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_593_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_593_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_557_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 593 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 593 (KafkaRDD[826] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 593.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 594 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 594 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 
16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 594 (KafkaRDD[797] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_594 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 593.0 (TID 593, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_557_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 558 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_592_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_556_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_594_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_556_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_594_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 594 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 594 (KafkaRDD[797] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 594.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 596 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 595 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 595 (KafkaRDD[823] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_595 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_591_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 594.0 (TID 594, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_593_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_595_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_595_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 595 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 595 
(KafkaRDD[823] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 595.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 595 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 596 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 596 (KafkaRDD[821] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 557 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 560 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_596 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 595.0 (TID 595, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_558_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_558_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_586_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 559 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 562 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_596_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_596_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_560_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 596 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 596 (KafkaRDD[821] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 596.0 with 1 tasks 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_594_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 597 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 597 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 597 (KafkaRDD[811] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_597 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_560_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 
INFO scheduler.TaskSetManager: Starting task 0.0 in stage 596.0 (TID 596, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 561 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_559_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_595_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_559_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_597_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_597_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 597 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 597 (KafkaRDD[811] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 597.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 600 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 564 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 598 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 598 (KafkaRDD[803] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_598 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_562_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 597.0 (TID 597, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_562_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 563 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_598_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_598_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_561_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 598 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 598 (KafkaRDD[803] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 598.0 with 1 tasks 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_596_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO 
scheduler.DAGScheduler: Got job 598 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 599 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 599 (KafkaRDD[816] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_599 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 598.0 (TID 598, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_599_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_599_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 599 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 599 (KafkaRDD[816] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 599.0 with 1 tasks 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_597_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 599 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 600 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 600 (KafkaRDD[819] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_600 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 599.0 (TID 599, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_600_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_600_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 600 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 600 (KafkaRDD[819] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 600.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 601 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 601 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: 
List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 601 (KafkaRDD[807] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_601 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 600.0 (TID 600, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_561_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 566 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_601_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_601_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_564_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 601 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_598_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 601 (KafkaRDD[807] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 601.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 602 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 602 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 602 (KafkaRDD[824] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_602 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_564_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 601.0 (TID 601, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 565 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_602_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_602_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_563_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 602 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 602 (KafkaRDD[824] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 602.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 603 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 603 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 603 (KafkaRDD[825] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_599_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_603 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 602.0 (TID 602, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_603_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_603_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 603 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 603 (KafkaRDD[825] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 603.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 604 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 604 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 604 (KafkaRDD[794] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_604 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 603.0 (TID 603, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_604_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_604_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 604 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 604 (KafkaRDD[794] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 604.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 605 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 605 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO 
scheduler.DAGScheduler: Submitting ResultStage 605 (KafkaRDD[802] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_605 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 604.0 (TID 604, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_605_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_605_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.1 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 605 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 605 (KafkaRDD[802] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 605.0 with 1 tasks 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Got job 606 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 606 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting ResultStage 606 (KafkaRDD[815] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_606 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 605.0 (TID 605, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_601_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.MemoryStore: Block broadcast_606_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_602_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_606_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.1 MB) 18/04/17 16:55:00 INFO spark.SparkContext: Created broadcast 606 from broadcast at DAGScheduler.scala:1006 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 606 (KafkaRDD[815] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Adding task set 606.0 with 1 tasks 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_603_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 606.0 (TID 606, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_600_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_563_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed 
broadcast_566_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.1 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_606_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_566_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_604_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Added broadcast_605_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 567 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_568_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_568_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 569 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_567_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_567_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 568 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_570_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_570_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 571 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_569_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_569_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 570 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_572_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_572_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 573 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_571_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_571_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 572 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_573_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_573_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 574 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_576_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_576_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 
3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 577 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_575_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_575_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 576 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_578_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_578_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 579 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_577_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_577_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 578 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_580_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_580_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 604.0 (TID 604) in 51 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 604.0, whose tasks have all completed, from pool 18/04/17 16:55:00 INFO scheduler.DAGScheduler: ResultStage 604 (foreachPartition at PredictorEngineApp.java:153) finished in 0.051 s 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Job 604 finished: foreachPartition at PredictorEngineApp.java:153, took 0.162943 s 18/04/17 16:55:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x416eb788 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x416eb7880x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35972, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 581 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_579_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:00 INFO storage.BlockManagerInfo: Removed broadcast_579_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:00 INFO spark.ContextCleaner: Cleaned accumulator 580 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c947e, negotiated timeout = 60000 18/04/17 16:55:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c947e 18/04/17 16:55:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c947e closed 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.2 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 597.0 (TID 597) in 112 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: ResultStage 597 (foreachPartition at PredictorEngineApp.java:153) finished in 0.113 s 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 597.0, whose tasks have all completed, from pool 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Job 597 finished: foreachPartition at PredictorEngineApp.java:153, took 0.200657 s 18/04/17 16:55:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x51cb22f5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x51cb22f50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57826, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9443, negotiated timeout = 60000 18/04/17 16:55:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9443 18/04/17 16:55:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9443 closed 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 598.0 (TID 598) in 134 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:55:00 INFO scheduler.DAGScheduler: ResultStage 598 (foreachPartition at PredictorEngineApp.java:153) finished in 0.135 s 18/04/17 16:55:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 598.0, whose tasks have all completed, from pool 18/04/17 16:55:00 INFO scheduler.DAGScheduler: Job 600 finished: foreachPartition at PredictorEngineApp.java:153, took 0.226974 s 18/04/17 16:55:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7521f47a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7521f47a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40573, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.19 from job set of time 1523973300000 ms 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d75, negotiated timeout = 60000 18/04/17 16:55:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d75 18/04/17 16:55:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d75 closed 18/04/17 16:55:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.11 from job set of time 1523973300000 ms 18/04/17 16:55:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 581.0 (TID 581) in 1698 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:55:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 581.0, whose tasks have all completed, from pool 18/04/17 16:55:01 INFO scheduler.DAGScheduler: ResultStage 581 (foreachPartition at PredictorEngineApp.java:153) finished in 1.699 s 18/04/17 16:55:01 INFO scheduler.DAGScheduler: Job 581 finished: foreachPartition at PredictorEngineApp.java:153, took 1.707514 s 18/04/17 16:55:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6ad65143 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6ad651430x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40583, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d7c, negotiated timeout = 60000 18/04/17 16:55:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d7c 18/04/17 16:55:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d7c closed 18/04/17 16:55:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.25 from job set of time 1523973300000 ms 18/04/17 16:55:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 591.0 (TID 591) in 1978 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:55:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 591.0, whose tasks have all completed, from pool 18/04/17 16:55:02 INFO scheduler.DAGScheduler: ResultStage 591 (foreachPartition at PredictorEngineApp.java:153) finished in 1.978 s 18/04/17 16:55:02 INFO scheduler.DAGScheduler: Job 591 finished: foreachPartition at PredictorEngineApp.java:153, took 2.042758 s 18/04/17 16:55:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1ae4c505 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1ae4c5050x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40587, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d7d, negotiated timeout = 60000 18/04/17 16:55:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d7d 18/04/17 16:55:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d7d closed 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.7 from job set of time 1523973300000 ms 18/04/17 16:55:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 602.0 (TID 602) in 2733 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:55:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 602.0, whose tasks have all completed, from pool 18/04/17 16:55:02 INFO scheduler.DAGScheduler: ResultStage 602 (foreachPartition at PredictorEngineApp.java:153) finished in 2.734 s 18/04/17 16:55:02 INFO scheduler.DAGScheduler: Job 602 finished: foreachPartition at PredictorEngineApp.java:153, took 2.840987 s 18/04/17 16:55:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x37246446 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x372464460x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40591, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d7e, negotiated timeout = 60000 18/04/17 16:55:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d7e 18/04/17 16:55:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d7e closed 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.32 from job set of time 1523973300000 ms 18/04/17 16:55:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 583.0 (TID 583) in 2865 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:55:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 583.0, whose tasks have all completed, from pool 18/04/17 16:55:02 INFO scheduler.DAGScheduler: ResultStage 583 (foreachPartition at PredictorEngineApp.java:153) finished in 2.865 s 18/04/17 16:55:02 INFO scheduler.DAGScheduler: Job 583 finished: foreachPartition at PredictorEngineApp.java:153, took 2.882265 s 18/04/17 16:55:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x23c8e25b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x23c8e25b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35999, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9484, negotiated timeout = 60000 18/04/17 16:55:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9484 18/04/17 16:55:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9484 closed 18/04/17 16:55:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.9 from job set of time 1523973300000 ms 18/04/17 16:55:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 587.0 (TID 587) in 3297 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:55:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 587.0, whose tasks have all completed, from pool 18/04/17 16:55:03 INFO scheduler.DAGScheduler: ResultStage 587 (foreachPartition at PredictorEngineApp.java:153) finished in 3.298 s 18/04/17 16:55:03 INFO scheduler.DAGScheduler: Job 587 finished: foreachPartition at PredictorEngineApp.java:153, took 3.329749 s 18/04/17 16:55:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x55f33c49 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x55f33c490x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36003, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9485, negotiated timeout = 60000 18/04/17 16:55:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 603.0 (TID 603) in 3233 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:55:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 603.0, whose tasks have all completed, from pool 18/04/17 16:55:03 INFO scheduler.DAGScheduler: ResultStage 603 (foreachPartition at PredictorEngineApp.java:153) finished in 3.234 s 18/04/17 16:55:03 INFO scheduler.DAGScheduler: Job 603 finished: foreachPartition at PredictorEngineApp.java:153, took 3.343132 s 18/04/17 16:55:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9485 18/04/17 16:55:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9485 closed 18/04/17 16:55:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.33 from job set of time 1523973300000 ms 18/04/17 16:55:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.8 from job set of time 1523973300000 ms 18/04/17 16:55:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 606.0 (TID 606) in 3370 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:55:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 606.0, whose tasks have all completed, from pool 18/04/17 16:55:03 INFO scheduler.DAGScheduler: ResultStage 606 (foreachPartition at PredictorEngineApp.java:153) finished in 3.371 s 18/04/17 16:55:03 INFO scheduler.DAGScheduler: Job 606 finished: foreachPartition at PredictorEngineApp.java:153, took 3.486210 s 18/04/17 16:55:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x366f0f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x366f0f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57857, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9453, negotiated timeout = 60000 18/04/17 16:55:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9453 18/04/17 16:55:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9453 closed 18/04/17 16:55:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.23 from job set of time 1523973300000 ms 18/04/17 16:55:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 595.0 (TID 595) in 5082 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:55:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 595.0, whose tasks have all completed, from pool 18/04/17 16:55:05 INFO scheduler.DAGScheduler: ResultStage 595 (foreachPartition at PredictorEngineApp.java:153) finished in 5.082 s 18/04/17 16:55:05 INFO scheduler.DAGScheduler: Job 596 finished: foreachPartition at PredictorEngineApp.java:153, took 5.163649 s 18/04/17 16:55:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x624bc90d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x624bc90d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57862, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9458, negotiated timeout = 60000 18/04/17 16:55:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9458 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9458 closed 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.31 from job set of time 1523973300000 ms 18/04/17 16:55:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 582.0 (TID 582) in 5205 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:55:05 INFO scheduler.DAGScheduler: ResultStage 582 (foreachPartition at PredictorEngineApp.java:153) finished in 5.205 s 18/04/17 16:55:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 582.0, whose tasks have all completed, from pool 18/04/17 16:55:05 INFO scheduler.DAGScheduler: Job 582 finished: foreachPartition at PredictorEngineApp.java:153, took 5.218278 s 18/04/17 16:55:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72544add connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x72544add0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36014, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9489, negotiated timeout = 60000 18/04/17 16:55:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9489 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9489 closed 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.12 from job set of time 1523973300000 ms 18/04/17 16:55:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 590.0 (TID 590) in 5238 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:55:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 590.0, whose tasks have all completed, from pool 18/04/17 16:55:05 INFO scheduler.DAGScheduler: ResultStage 590 (foreachPartition at PredictorEngineApp.java:153) finished in 5.239 s 18/04/17 16:55:05 INFO scheduler.DAGScheduler: Job 590 finished: foreachPartition at PredictorEngineApp.java:153, took 5.284398 s 18/04/17 16:55:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4e67fea connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4e67fea0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57868, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9459, negotiated timeout = 60000 18/04/17 16:55:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9459 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9459 closed 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.18 from job set of time 1523973300000 ms 18/04/17 16:55:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 601.0 (TID 601) in 5645 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:55:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 601.0, whose tasks have all completed, from pool 18/04/17 16:55:05 INFO scheduler.DAGScheduler: ResultStage 601 (foreachPartition at PredictorEngineApp.java:153) finished in 5.645 s 18/04/17 16:55:05 INFO scheduler.DAGScheduler: Job 601 finished: foreachPartition at PredictorEngineApp.java:153, took 5.749227 s 18/04/17 16:55:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x796c78dd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x796c78dd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36021, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c948b, negotiated timeout = 60000 18/04/17 16:55:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c948b 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c948b closed 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.15 from job set of time 1523973300000 ms 18/04/17 16:55:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 599.0 (TID 599) in 5720 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:55:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 599.0, whose tasks have all completed, from pool 18/04/17 16:55:05 INFO scheduler.DAGScheduler: ResultStage 599 (foreachPartition at PredictorEngineApp.java:153) finished in 5.722 s 18/04/17 16:55:05 INFO scheduler.DAGScheduler: Job 598 finished: foreachPartition at PredictorEngineApp.java:153, took 5.817467 s 18/04/17 16:55:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66f7e7ff connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x66f7e7ff0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40619, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d81, negotiated timeout = 60000 18/04/17 16:55:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d81 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d81 closed 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.24 from job set of time 1523973300000 ms 18/04/17 16:55:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 588.0 (TID 588) in 5822 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:55:05 INFO scheduler.DAGScheduler: ResultStage 588 (foreachPartition at PredictorEngineApp.java:153) finished in 5.823 s 18/04/17 16:55:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 588.0, whose tasks have all completed, from pool 18/04/17 16:55:05 INFO scheduler.DAGScheduler: Job 588 finished: foreachPartition at PredictorEngineApp.java:153, took 5.859305 s 18/04/17 16:55:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3b9136c2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3b9136c20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57878, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a945a, negotiated timeout = 60000 18/04/17 16:55:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a945a 18/04/17 16:55:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a945a closed 18/04/17 16:55:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.6 from job set of time 1523973300000 ms 18/04/17 16:55:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 593.0 (TID 593) in 5870 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:55:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 593.0, whose tasks have all completed, from pool 18/04/17 16:55:06 INFO scheduler.DAGScheduler: ResultStage 593 (foreachPartition at PredictorEngineApp.java:153) finished in 5.871 s 18/04/17 16:55:06 INFO scheduler.DAGScheduler: Job 593 finished: foreachPartition at PredictorEngineApp.java:153, took 5.943251 s 18/04/17 16:55:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2e0b25c1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2e0b25c10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36031, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c948d, negotiated timeout = 60000 18/04/17 16:55:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c948d 18/04/17 16:55:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c948d closed 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.34 from job set of time 1523973300000 ms 18/04/17 16:55:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 600.0 (TID 600) in 5873 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:55:06 INFO scheduler.DAGScheduler: ResultStage 600 (foreachPartition at PredictorEngineApp.java:153) finished in 5.874 s 18/04/17 16:55:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 600.0, whose tasks have all completed, from pool 18/04/17 16:55:06 INFO scheduler.DAGScheduler: Job 599 finished: foreachPartition at PredictorEngineApp.java:153, took 5.973650 s 18/04/17 16:55:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2a9e9eca connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2a9e9eca0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57885, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a945b, negotiated timeout = 60000 18/04/17 16:55:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a945b 18/04/17 16:55:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a945b closed 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.27 from job set of time 1523973300000 ms 18/04/17 16:55:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 592.0 (TID 592) in 6058 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:55:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 592.0, whose tasks have all completed, from pool 18/04/17 16:55:06 INFO scheduler.DAGScheduler: ResultStage 592 (foreachPartition at PredictorEngineApp.java:153) finished in 6.059 s 18/04/17 16:55:06 INFO scheduler.DAGScheduler: Job 592 finished: foreachPartition at PredictorEngineApp.java:153, took 6.127095 s 18/04/17 16:55:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x207512ff connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x207512ff0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36037, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c948e, negotiated timeout = 60000 18/04/17 16:55:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c948e 18/04/17 16:55:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c948e closed 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.20 from job set of time 1523973300000 ms 18/04/17 16:55:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 586.0 (TID 586) in 6284 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:55:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 586.0, whose tasks have all completed, from pool 18/04/17 16:55:06 INFO scheduler.DAGScheduler: ResultStage 586 (foreachPartition at PredictorEngineApp.java:153) finished in 6.284 s 18/04/17 16:55:06 INFO scheduler.DAGScheduler: Job 586 finished: foreachPartition at PredictorEngineApp.java:153, took 6.313195 s 18/04/17 16:55:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x78b34d96 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x78b34d960x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57891, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a945c, negotiated timeout = 60000 18/04/17 16:55:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a945c 18/04/17 16:55:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a945c closed 18/04/17 16:55:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.1 from job set of time 1523973300000 ms 18/04/17 16:55:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 584.0 (TID 584) in 7304 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:55:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 584.0, whose tasks have all completed, from pool 18/04/17 16:55:07 INFO scheduler.DAGScheduler: ResultStage 584 (foreachPartition at PredictorEngineApp.java:153) finished in 7.304 s 18/04/17 16:55:07 INFO scheduler.DAGScheduler: Job 584 finished: foreachPartition at PredictorEngineApp.java:153, took 7.324897 s 18/04/17 16:55:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x73c77258 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x73c772580x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57896, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a945e, negotiated timeout = 60000 18/04/17 16:55:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a945e 18/04/17 16:55:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a945e closed 18/04/17 16:55:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.28 from job set of time 1523973300000 ms 18/04/17 16:55:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 596.0 (TID 596) in 8369 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:55:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 596.0, whose tasks have all completed, from pool 18/04/17 16:55:08 INFO scheduler.DAGScheduler: ResultStage 596 (foreachPartition at PredictorEngineApp.java:153) finished in 8.371 s 18/04/17 16:55:08 INFO scheduler.DAGScheduler: Job 595 finished: foreachPartition at PredictorEngineApp.java:153, took 8.455956 s 18/04/17 16:55:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6dacc6d5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6dacc6d50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57900, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a945f, negotiated timeout = 60000 18/04/17 16:55:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a945f 18/04/17 16:55:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a945f closed 18/04/17 16:55:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.29 from job set of time 1523973300000 ms 18/04/17 16:55:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 605.0 (TID 605) in 9829 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:55:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 605.0, whose tasks have all completed, from pool 18/04/17 16:55:10 INFO scheduler.DAGScheduler: ResultStage 605 (foreachPartition at PredictorEngineApp.java:153) finished in 9.830 s 18/04/17 16:55:10 INFO scheduler.DAGScheduler: Job 605 finished: foreachPartition at PredictorEngineApp.java:153, took 9.943252 s 18/04/17 16:55:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4be0aae7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4be0aae70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40649, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d88, negotiated timeout = 60000 18/04/17 16:55:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d88 18/04/17 16:55:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d88 closed 18/04/17 16:55:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.10 from job set of time 1523973300000 ms 18/04/17 16:55:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 589.0 (TID 589) in 11196 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:55:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 589.0, whose tasks have all completed, from pool 18/04/17 16:55:11 INFO scheduler.DAGScheduler: ResultStage 589 (foreachPartition at PredictorEngineApp.java:153) finished in 11.197 s 18/04/17 16:55:11 INFO scheduler.DAGScheduler: Job 589 finished: foreachPartition at PredictorEngineApp.java:153, took 11.236574 s 18/04/17 16:55:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x24518631 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x245186310x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:57911, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9460, negotiated timeout = 60000 18/04/17 16:55:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9460 18/04/17 16:55:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9460 closed 18/04/17 16:55:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.22 from job set of time 1523973300000 ms 18/04/17 16:55:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 585.0 (TID 585) in 15893 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:55:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 585.0, whose tasks have all completed, from pool 18/04/17 16:55:15 INFO scheduler.DAGScheduler: ResultStage 585 (foreachPartition at PredictorEngineApp.java:153) finished in 15.893 s 18/04/17 16:55:15 INFO scheduler.DAGScheduler: Job 585 finished: foreachPartition at PredictorEngineApp.java:153, took 15.917852 s 18/04/17 16:55:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6ca123d5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6ca123d50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36086, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9490, negotiated timeout = 60000 18/04/17 16:55:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9490 18/04/17 16:55:16 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9490 closed 18/04/17 16:55:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:16 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.26 from job set of time 1523973300000 ms 18/04/17 16:55:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 594.0 (TID 594) in 15988 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:55:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 594.0, whose tasks have all completed, from pool 18/04/17 16:55:16 INFO scheduler.DAGScheduler: ResultStage 594 (foreachPartition at PredictorEngineApp.java:153) finished in 15.990 s 18/04/17 16:55:16 INFO scheduler.DAGScheduler: Job 594 finished: foreachPartition at PredictorEngineApp.java:153, took 16.066489 s 18/04/17 16:55:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7343bfb6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:55:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7343bfb60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:55:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:55:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40684, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:55:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d89, negotiated timeout = 60000 18/04/17 16:55:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d89 18/04/17 16:55:16 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d89 closed 18/04/17 16:55:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:55:16 INFO scheduler.JobScheduler: Finished job streaming job 1523973300000 ms.5 from job set of time 1523973300000 ms 18/04/17 16:55:16 INFO scheduler.JobScheduler: Total delay: 16.165 s for time 1523973300000 ms (execution: 16.105 s) 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 756 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 756 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 720 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 720 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 756 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 756 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 720 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 720 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 757 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 757 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 721 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 721 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 757 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 757 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 721 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 721 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 758 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 758 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 722 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 722 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 758 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 758 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 722 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 722 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 759 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 759 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 723 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 723 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 759 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 759 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 723 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 723 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 760 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 760 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 724 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 724 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 760 from persistence list 18/04/17 
16:55:16 INFO storage.BlockManager: Removing RDD 760 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 724 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 724 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 761 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 761 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 725 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 725 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 761 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 761 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 725 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 725 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 762 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 762 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 726 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 726 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 762 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 762 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 726 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 726 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 763 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 763 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 727 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 727 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 763 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 763 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 727 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 727 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 764 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 764 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 728 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 728 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 764 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 764 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 728 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 728 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 765 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 765 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 729 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 729 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 606 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 765 from persistence list 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 583 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 765 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 729 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 729 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 766 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_581_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 766 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 730 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 730 
18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 766 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_581_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 766 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 730 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 730 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 767 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 767 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 731 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 731 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 767 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 767 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 731 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 731 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 768 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 768 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 732 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 732 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 768 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 768 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 732 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 732 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 769 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 769 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 733 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 733 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 769 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 769 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 733 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 733 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 770 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 770 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 734 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 734 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 770 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 770 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 734 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 734 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 771 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 771 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 735 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 735 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 771 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 771 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 735 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 735 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 772 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 772 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 736 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 736 18/04/17 16:55:16 INFO kafka.KafkaRDD: 
Removing RDD 772 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 772 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 736 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 736 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 773 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 773 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 737 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 737 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 773 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 773 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 737 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 737 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 774 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 774 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 738 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 738 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 774 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 774 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 738 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 738 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 775 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 775 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 739 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 739 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 775 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 775 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 739 from persistence list 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 582 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 739 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 776 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 776 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 740 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_583_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 740 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 776 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 776 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 740 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_583_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 740 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 777 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 777 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 741 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 741 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 777 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 777 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 741 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 741 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 778 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 778 18/04/17 
16:55:16 INFO kafka.KafkaRDD: Removing RDD 742 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 742 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 778 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 778 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 742 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 742 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 779 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 779 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 743 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 743 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 779 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 779 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 743 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 743 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 780 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 780 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 744 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 744 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 780 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 780 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 744 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 744 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 781 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 781 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 745 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 745 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 781 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 781 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 745 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 745 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 782 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 782 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 746 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 746 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 782 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 782 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 746 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 746 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 783 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 783 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 747 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 747 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 783 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 783 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 747 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 747 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 784 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 784 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 748 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 748 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 784 from 
persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 784 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 748 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 748 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 785 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 785 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 749 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 749 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 785 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 785 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 749 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 749 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 786 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 786 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 750 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 750 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 584 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 786 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 786 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 750 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 750 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 787 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_582_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 787 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 751 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_582_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 751 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 787 from persistence list 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 586 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 787 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 751 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 751 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 788 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_584_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 788 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 752 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_584_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 752 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 788 from persistence list 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 585 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 788 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 752 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 752 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 789 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_586_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 789 18/04/17 16:55:16 
INFO kafka.KafkaRDD: Removing RDD 753 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_586_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 753 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 789 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 789 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 587 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 753 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 753 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 790 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_585_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 790 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 754 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_585_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 754 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 589 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 790 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 790 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 754 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 754 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 791 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_587_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 791 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 755 from persistence list 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_587_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 755 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 791 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 791 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 588 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 590 18/04/17 16:55:16 INFO kafka.KafkaRDD: Removing RDD 755 from persistence list 18/04/17 16:55:16 INFO storage.BlockManager: Removing RDD 755 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_588_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:55:16 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973120000 ms 1523973180000 ms 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_588_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_590_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_590_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 591 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_589_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO 
storage.BlockManagerInfo: Removed broadcast_589_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 593 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_591_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_591_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 592 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_593_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_593_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 594 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_592_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_592_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_606_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_606_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 607 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_605_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_605_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 596 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_594_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_594_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 595 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_596_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_596_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 597 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_595_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_595_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 599 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_597_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_597_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 598 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_599_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:55:16 INFO 
storage.BlockManagerInfo: Removed broadcast_599_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 600 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_598_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_598_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_600_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_600_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 601 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 603 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_601_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_601_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 602 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_603_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_603_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 604 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_602_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_602_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_604_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:55:16 INFO storage.BlockManagerInfo: Removed broadcast_604_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:55:16 INFO spark.ContextCleaner: Cleaned accumulator 605 18/04/17 16:56:00 INFO scheduler.JobScheduler: Added jobs for time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.0 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at 
PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 607 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.1 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 607 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.2 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.3 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.4 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.0 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.5 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.3 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.6 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.7 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.9 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO 
scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.4 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.8 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.11 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.10 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.12 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.13 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.14 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 607 (KafkaRDD[833] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.15 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.13 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.14 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.17 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.16 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.19 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.18 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.17 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.20 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.22 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.16 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.21 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.24 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.23 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.21 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.25 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.26 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.27 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.28 from job set 
of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.29 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.30 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.31 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.30 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.32 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.33 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.34 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973360000 ms.35 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_607 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_607_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_607_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 607 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 607 (KafkaRDD[833] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 607.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 608 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 608 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 608 (KafkaRDD[829] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_608 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 607.0 (TID 607, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_608_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_608_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 608 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 608 (KafkaRDD[829] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 608.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 609 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: 
ResultStage 609 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 609 (KafkaRDD[835] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_609 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 608.0 (TID 608, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_609_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_609_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 609 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 609 (KafkaRDD[835] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 609.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 611 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 610 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 610 (KafkaRDD[857] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_610 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 609.0 (TID 609, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_610_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_610_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 610 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 610 (KafkaRDD[857] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 610.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 610 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 611 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 611 (KafkaRDD[839] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_611 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 
16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 610.0 (TID 610, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_607_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_611_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_611_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 611 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 611 (KafkaRDD[839] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 611.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 612 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 612 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 612 (KafkaRDD[851] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_612 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 611.0 (TID 611, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_612_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_612_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 612 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 612 (KafkaRDD[851] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 612.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 614 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 613 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 613 (KafkaRDD[848] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_613 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 612.0 (TID 612, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_613_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_613_piece0 in memory on 
***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 613 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 613 (KafkaRDD[848] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 613.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 613 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 614 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 614 (KafkaRDD[838] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 613.0 (TID 613, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_614 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_610_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_614_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_614_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 614 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 614 (KafkaRDD[838] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 614.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 615 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 615 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 615 (KafkaRDD[834] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_615 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 614.0 (TID 614, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_609_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_615_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_615_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_611_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO 
spark.SparkContext: Created broadcast 615 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 615 (KafkaRDD[834] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 615.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 616 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 616 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 616 (KafkaRDD[846] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_616 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 615.0 (TID 615, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_608_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_616_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_616_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 616 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 616 (KafkaRDD[846] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 616.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 617 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 617 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 617 (KafkaRDD[862] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_617 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 616.0 (TID 616, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_617_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_613_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_617_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 617 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 617 (KafkaRDD[862] at createDirectStream at 
PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 617.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 618 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 618 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 618 (KafkaRDD[853] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_618 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_615_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 617.0 (TID 617, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_612_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_614_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_618_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_618_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 618 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 618 (KafkaRDD[853] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 618.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 620 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 619 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 619 (KafkaRDD[863] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_619 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 618.0 (TID 618, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_619_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_619_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 619 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 619 (KafkaRDD[863] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding 
task set 619.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 619 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 620 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 620 (KafkaRDD[861] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_620 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 619.0 (TID 619, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_616_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_620_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_620_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 620 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 620 (KafkaRDD[861] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 620.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 621 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 621 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 621 (KafkaRDD[860] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_621 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 620.0 (TID 620, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_618_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_617_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_621_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_621_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 621 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 621 (KafkaRDD[860] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 621.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 622 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 622 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 622 (KafkaRDD[852] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_622 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_619_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 621.0 (TID 621, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_622_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_622_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 622 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 622 (KafkaRDD[852] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 622.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 624 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 623 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 623 (KafkaRDD[856] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 622.0 (TID 622, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_623 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_620_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_621_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_623_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_623_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 623 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 623 (KafkaRDD[856] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 623.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 623 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 
16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 624 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 624 (KafkaRDD[847] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_624 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 623.0 (TID 623, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_622_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_624_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_624_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 624 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 624 (KafkaRDD[847] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 624.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 625 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 625 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 625 (KafkaRDD[859] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_623_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_625 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 624.0 (TID 624, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_625_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_625_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_624_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 625 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 625 (KafkaRDD[859] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 625.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 615.0 (TID 615) in 57 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 626 (foreachPartition at 
PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 626 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 615.0, whose tasks have all completed, from pool 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 626 (KafkaRDD[855] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_626 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 625.0 (TID 625, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_626_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_626_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 626 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 626 (KafkaRDD[855] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 626.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 627 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 627 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 627 (KafkaRDD[830] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_627 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 626.0 (TID 626, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_627_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_627_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 627 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 627 (KafkaRDD[830] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 627.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 628 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 628 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 628 
(KafkaRDD[837] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_628 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 627.0 (TID 627, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_625_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_628_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_628_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 628 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 628 (KafkaRDD[837] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 628.0 with 1 tasks 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_626_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 629 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 629 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 629 (KafkaRDD[850] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_629 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 628.0 (TID 628, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_629_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_629_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 629 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 629 (KafkaRDD[850] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 629.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 630 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 630 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 630 (KafkaRDD[843] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_630 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 
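[Editor's note] The stage descriptions above repeatedly point at two places in the application: createDirectStream at PredictorEngineApp.java:125 and foreachPartition at PredictorEngineApp.java:153, followed by HBase client connections (RecoverableZooKeeper / hconnection-0x...) being opened and closed once per job. The original source is not part of this log, so the following is only a minimal, hedged sketch of what that code path might look like on Spark 1.6 with the Kafka 0.8 direct-stream API; the class name, broker list, topic, batch interval, HBase table and column family below are placeholders, not values taken from the log.

import kafka.serializer.StringDecoder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical reconstruction; not the actual PredictorEngineApp source.
public class PredictorEngineSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        // Batch interval is an assumption; the log only shows a single batch time (1523973360000 ms).
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker-1:9092"); // placeholder broker list

        // Roughly what the "createDirectStream at PredictorEngineApp.java:125" frames suggest:
        // a direct Kafka stream per topic, yielding the KafkaRDDs seen in each ResultStage.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, Collections.singleton("events")); // placeholder topic

        // Roughly what the "foreachPartition at PredictorEngineApp.java:153" frames suggest:
        // each partition writes its records to HBase through a freshly created connection.
        stream.foreachRDD(rdd -> rdd.foreachPartition(partition -> {
            Configuration hbaseConf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
            try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
                 Table table = connection.getTable(TableName.valueOf("predictions"))) { // placeholder table
                while (partition.hasNext()) {
                    Tuple2<String, String> record = partition.next();
                    Put put = new Put(Bytes.toBytes(record._1()));
                    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"), Bytes.toBytes(record._2()));
                    table.put(put);
                }
            }
        }));

        jssc.start();
        jssc.awaitTermination();
    }
}

The per-job "Process identifier=hconnection-0x... connecting to ZooKeeper" followed almost immediately by "Closing zookeeper session" entries later in this log are consistent with such a connection being created and torn down inside every foreachPartition call rather than reused across batches; that is an observation about the logged pattern, not a statement about the actual application code.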
18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 629.0 (TID 629, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_630_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_630_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 630 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 630 (KafkaRDD[843] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 630.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 631 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 631 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 631 (KafkaRDD[836] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_628_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_631 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 630.0 (TID 630, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_631_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_631_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 631 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 631 (KafkaRDD[836] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 631.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 632 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 632 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 632 (KafkaRDD[854] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_632 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 631.0 (TID 631, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_632_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_632_piece0 in 
memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 632 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 632 (KafkaRDD[854] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 632.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Got job 633 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 633 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting ResultStage 633 (KafkaRDD[840] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_633 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 632.0 (TID 632, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_630_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.MemoryStore: Block broadcast_633_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_633_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:00 INFO spark.SparkContext: Created broadcast 633 from broadcast at DAGScheduler.scala:1006 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 633 (KafkaRDD[840] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Adding task set 633.0 with 1 tasks 18/04/17 16:56:00 INFO scheduler.DAGScheduler: ResultStage 615 (foreachPartition at PredictorEngineApp.java:153) finished in 0.089 s 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Job 615 finished: foreachPartition at PredictorEngineApp.java:153, took 0.121214 s 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_627_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3464161c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3464161c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_631_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 633.0 (TID 633, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40838, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_632_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_633_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO storage.BlockManagerInfo: Added broadcast_629_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d91, negotiated timeout = 60000 18/04/17 16:56:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d91 18/04/17 16:56:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d91 closed 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.6 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 632.0 (TID 632) in 155 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 632.0, whose tasks have all completed, from pool 18/04/17 16:56:00 INFO scheduler.DAGScheduler: ResultStage 632 (foreachPartition at PredictorEngineApp.java:153) finished in 0.156 s 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Job 632 finished: foreachPartition at PredictorEngineApp.java:153, took 0.273121 s 18/04/17 16:56:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x519aabf2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x519aabf20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58097, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 619.0 (TID 619) in 222 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:56:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 619.0, whose tasks have all completed, from pool 18/04/17 16:56:00 INFO scheduler.DAGScheduler: ResultStage 619 (foreachPartition at PredictorEngineApp.java:153) finished in 0.223 s 18/04/17 16:56:00 INFO scheduler.DAGScheduler: Job 620 finished: foreachPartition at PredictorEngineApp.java:153, took 0.278634 s 18/04/17 16:56:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f99a2f5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f99a2f50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36247, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a946b, negotiated timeout = 60000 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c949b, negotiated timeout = 60000 18/04/17 16:56:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c949b 18/04/17 16:56:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c949b closed 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a946b 18/04/17 16:56:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.35 from job set of time 1523973360000 ms 18/04/17 16:56:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a946b closed 18/04/17 16:56:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.26 from job set of time 1523973360000 ms 18/04/17 16:56:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 618.0 (TID 618) in 1385 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:56:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 618.0, whose tasks have all completed, from pool 18/04/17 16:56:01 INFO scheduler.DAGScheduler: ResultStage 618 (foreachPartition at PredictorEngineApp.java:153) finished in 1.385 s 18/04/17 16:56:01 INFO scheduler.DAGScheduler: Job 618 finished: foreachPartition at PredictorEngineApp.java:153, took 1.427229 s 18/04/17 16:56:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b98c844 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 16:56:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7b98c8440x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40849, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d97, negotiated timeout = 60000 18/04/17 16:56:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d97 18/04/17 16:56:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d97 closed 18/04/17 16:56:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.25 from job set of time 1523973360000 ms 18/04/17 16:56:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 609.0 (TID 609) in 2431 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:56:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 609.0, whose tasks have all completed, from pool 18/04/17 16:56:02 INFO scheduler.DAGScheduler: ResultStage 609 (foreachPartition at PredictorEngineApp.java:153) finished in 2.431 s 18/04/17 16:56:02 INFO scheduler.DAGScheduler: Job 609 finished: foreachPartition at PredictorEngineApp.java:153, took 2.444598 s 18/04/17 16:56:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x52a86971 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x52a869710x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40853, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d99, negotiated timeout = 60000 18/04/17 16:56:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d99 18/04/17 16:56:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d99 closed 18/04/17 16:56:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.7 from job set of time 1523973360000 ms 18/04/17 16:56:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 625.0 (TID 625) in 2666 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:56:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 625.0, whose tasks have all completed, from pool 18/04/17 16:56:02 INFO scheduler.DAGScheduler: ResultStage 625 (foreachPartition at PredictorEngineApp.java:153) finished in 2.667 s 18/04/17 16:56:02 INFO scheduler.DAGScheduler: Job 625 finished: foreachPartition at PredictorEngineApp.java:153, took 2.756155 s 18/04/17 16:56:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b3ad25 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7b3ad250x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58113, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9471, negotiated timeout = 60000 18/04/17 16:56:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9471 18/04/17 16:56:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9471 closed 18/04/17 16:56:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.31 from job set of time 1523973360000 ms 18/04/17 16:56:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 631.0 (TID 631) in 4680 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:56:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 631.0, whose tasks have all completed, from pool 18/04/17 16:56:04 INFO scheduler.DAGScheduler: ResultStage 631 (foreachPartition at PredictorEngineApp.java:153) finished in 4.682 s 18/04/17 16:56:04 INFO scheduler.DAGScheduler: Job 631 finished: foreachPartition at PredictorEngineApp.java:153, took 4.797104 s 18/04/17 16:56:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b67e657 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1b67e6570x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36268, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94a1, negotiated timeout = 60000 18/04/17 16:56:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94a1 18/04/17 16:56:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94a1 closed 18/04/17 16:56:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.8 from job set of time 1523973360000 ms 18/04/17 16:56:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 624.0 (TID 624) in 4870 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:56:05 INFO scheduler.DAGScheduler: ResultStage 624 (foreachPartition at PredictorEngineApp.java:153) finished in 4.872 s 18/04/17 16:56:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 624.0, whose tasks have all completed, from pool 18/04/17 16:56:05 INFO scheduler.DAGScheduler: Job 623 finished: foreachPartition at PredictorEngineApp.java:153, took 4.952644 s 18/04/17 16:56:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4e54bec8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4e54bec80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58122, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9473, negotiated timeout = 60000 18/04/17 16:56:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9473 18/04/17 16:56:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9473 closed 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.19 from job set of time 1523973360000 ms 18/04/17 16:56:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 622.0 (TID 622) in 5008 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:56:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 622.0, whose tasks have all completed, from pool 18/04/17 16:56:05 INFO scheduler.DAGScheduler: ResultStage 622 (foreachPartition at PredictorEngineApp.java:153) finished in 5.009 s 18/04/17 16:56:05 INFO scheduler.DAGScheduler: Job 622 finished: foreachPartition at PredictorEngineApp.java:153, took 5.073648 s 18/04/17 16:56:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xa9bd4b0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xa9bd4b00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58126, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9474, negotiated timeout = 60000 18/04/17 16:56:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9474 18/04/17 16:56:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9474 closed 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.24 from job set of time 1523973360000 ms 18/04/17 16:56:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 630.0 (TID 630) in 5395 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:56:05 INFO scheduler.DAGScheduler: ResultStage 630 (foreachPartition at PredictorEngineApp.java:153) finished in 5.396 s 18/04/17 16:56:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 630.0, whose tasks have all completed, from pool 18/04/17 16:56:05 INFO scheduler.DAGScheduler: Job 630 finished: foreachPartition at PredictorEngineApp.java:153, took 5.508640 s 18/04/17 16:56:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1831c7f3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1831c7f30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58129, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9475, negotiated timeout = 60000 18/04/17 16:56:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9475 18/04/17 16:56:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9475 closed 18/04/17 16:56:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.15 from job set of time 1523973360000 ms 18/04/17 16:56:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 628.0 (TID 628) in 6988 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:56:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 628.0, whose tasks have all completed, from pool 18/04/17 16:56:07 INFO scheduler.DAGScheduler: ResultStage 628 (foreachPartition at PredictorEngineApp.java:153) finished in 6.990 s 18/04/17 16:56:07 INFO scheduler.DAGScheduler: Job 628 finished: foreachPartition at PredictorEngineApp.java:153, took 7.094864 s 18/04/17 16:56:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xf01c801 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xf01c8010x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40879, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28d9e, negotiated timeout = 60000 18/04/17 16:56:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28d9e 18/04/17 16:56:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28d9e closed 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.9 from job set of time 1523973360000 ms 18/04/17 16:56:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 633.0 (TID 633) in 7025 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:56:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 633.0, whose tasks have all completed, from pool 18/04/17 16:56:07 INFO scheduler.DAGScheduler: ResultStage 633 (foreachPartition at PredictorEngineApp.java:153) finished in 7.027 s 18/04/17 16:56:07 INFO scheduler.DAGScheduler: Job 633 finished: foreachPartition at PredictorEngineApp.java:153, took 7.147706 s 18/04/17 16:56:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2f616895 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2f6168950x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36287, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94a2, negotiated timeout = 60000 18/04/17 16:56:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94a2 18/04/17 16:56:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94a2 closed 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.12 from job set of time 1523973360000 ms 18/04/17 16:56:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 620.0 (TID 620) in 7541 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:56:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 620.0, whose tasks have all completed, from pool 18/04/17 16:56:07 INFO scheduler.DAGScheduler: ResultStage 620 (foreachPartition at PredictorEngineApp.java:153) finished in 7.542 s 18/04/17 16:56:07 INFO scheduler.DAGScheduler: Job 619 finished: foreachPartition at PredictorEngineApp.java:153, took 7.601279 s 18/04/17 16:56:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x558fc32f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x558fc32f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36290, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94a3, negotiated timeout = 60000 18/04/17 16:56:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94a3 18/04/17 16:56:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94a3 closed 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.33 from job set of time 1523973360000 ms 18/04/17 16:56:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 627.0 (TID 627) in 7601 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:56:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 627.0, whose tasks have all completed, from pool 18/04/17 16:56:07 INFO scheduler.DAGScheduler: ResultStage 627 (foreachPartition at PredictorEngineApp.java:153) finished in 7.602 s 18/04/17 16:56:07 INFO scheduler.DAGScheduler: Job 627 finished: foreachPartition at PredictorEngineApp.java:153, took 7.701762 s 18/04/17 16:56:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4b450351 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4b4503510x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58144, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a947b, negotiated timeout = 60000 18/04/17 16:56:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a947b 18/04/17 16:56:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a947b closed 18/04/17 16:56:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.2 from job set of time 1523973360000 ms 18/04/17 16:56:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 623.0 (TID 623) in 11097 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:56:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 623.0, whose tasks have all completed, from pool 18/04/17 16:56:11 INFO scheduler.DAGScheduler: ResultStage 623 (foreachPartition at PredictorEngineApp.java:153) finished in 11.099 s 18/04/17 16:56:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 616.0 (TID 616) in 11136 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:56:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 616.0, whose tasks have all completed, from pool 18/04/17 16:56:11 INFO scheduler.DAGScheduler: Job 624 finished: foreachPartition at PredictorEngineApp.java:153, took 11.171993 s 18/04/17 16:56:11 INFO scheduler.DAGScheduler: ResultStage 616 (foreachPartition at PredictorEngineApp.java:153) finished in 11.137 s 18/04/17 16:56:11 INFO scheduler.DAGScheduler: Job 616 finished: foreachPartition at PredictorEngineApp.java:153, took 11.172460 s 18/04/17 16:56:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3b192a2e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3b192a2e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c269ae4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c269ae40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36303, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36304, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94a6, negotiated timeout = 60000 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94a7, negotiated timeout = 60000 18/04/17 16:56:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94a7 18/04/17 16:56:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94a6 18/04/17 16:56:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94a7 closed 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94a6 closed 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.28 from job set of time 1523973360000 ms 18/04/17 16:56:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.18 from job set of time 1523973360000 ms 18/04/17 16:56:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 613.0 (TID 613) in 11608 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:56:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 613.0, whose tasks have all completed, from pool 18/04/17 16:56:11 INFO scheduler.DAGScheduler: ResultStage 613 (foreachPartition at PredictorEngineApp.java:153) finished in 11.609 s 18/04/17 16:56:11 INFO scheduler.DAGScheduler: Job 614 finished: foreachPartition at PredictorEngineApp.java:153, took 11.634562 s 18/04/17 16:56:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x661e6119 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x661e61190x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58161, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a947f, negotiated timeout = 60000 18/04/17 16:56:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a947f 18/04/17 16:56:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a947f closed 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.20 from job set of time 1523973360000 ms 18/04/17 16:56:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 621.0 (TID 621) in 11708 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:56:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 621.0, whose tasks have all completed, from pool 18/04/17 16:56:11 INFO scheduler.DAGScheduler: ResultStage 621 (foreachPartition at PredictorEngineApp.java:153) finished in 11.709 s 18/04/17 16:56:11 INFO scheduler.DAGScheduler: Job 621 finished: foreachPartition at PredictorEngineApp.java:153, took 11.770932 s 18/04/17 16:56:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7ad3e314 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7ad3e3140x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36313, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94a9, negotiated timeout = 60000 18/04/17 16:56:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94a9 18/04/17 16:56:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94a9 closed 18/04/17 16:56:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.32 from job set of time 1523973360000 ms 18/04/17 16:56:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 614.0 (TID 614) in 11987 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:56:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 614.0, whose tasks have all completed, from pool 18/04/17 16:56:12 INFO scheduler.DAGScheduler: ResultStage 614 (foreachPartition at PredictorEngineApp.java:153) finished in 11.987 s 18/04/17 16:56:12 INFO scheduler.DAGScheduler: Job 613 finished: foreachPartition at PredictorEngineApp.java:153, took 12.016545 s 18/04/17 16:56:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3cdb3576 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3cdb35760x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36316, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94aa, negotiated timeout = 60000 18/04/17 16:56:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94aa 18/04/17 16:56:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94aa closed 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.10 from job set of time 1523973360000 ms 18/04/17 16:56:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 611.0 (TID 611) in 12762 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:56:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 611.0, whose tasks have all completed, from pool 18/04/17 16:56:12 INFO scheduler.DAGScheduler: ResultStage 611 (foreachPartition at PredictorEngineApp.java:153) finished in 12.764 s 18/04/17 16:56:12 INFO scheduler.DAGScheduler: Job 610 finished: foreachPartition at PredictorEngineApp.java:153, took 12.782885 s 18/04/17 16:56:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2d30d175 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2d30d1750x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58171, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9480, negotiated timeout = 60000 18/04/17 16:56:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9480 18/04/17 16:56:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9480 closed 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 629.0 (TID 629) in 12713 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:56:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 629.0, whose tasks have all completed, from pool 18/04/17 16:56:12 INFO scheduler.DAGScheduler: ResultStage 629 (foreachPartition at PredictorEngineApp.java:153) finished in 12.714 s 18/04/17 16:56:12 INFO scheduler.DAGScheduler: Job 629 finished: foreachPartition at PredictorEngineApp.java:153, took 12.824005 s 18/04/17 16:56:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1e54b409 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1e54b4090x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58174, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.11 from job set of time 1523973360000 ms 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9481, negotiated timeout = 60000 18/04/17 16:56:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9481 18/04/17 16:56:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9481 closed 18/04/17 16:56:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.22 from job set of time 1523973360000 ms 18/04/17 16:56:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 626.0 (TID 626) in 12834 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:56:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 626.0, whose tasks have all completed, from pool 18/04/17 16:56:13 INFO scheduler.DAGScheduler: ResultStage 626 (foreachPartition at PredictorEngineApp.java:153) finished in 12.835 s 18/04/17 16:56:13 INFO scheduler.DAGScheduler: Job 626 finished: foreachPartition at PredictorEngineApp.java:153, took 12.929204 s 18/04/17 16:56:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x665be9ee connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x665be9ee0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40921, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28da3, negotiated timeout = 60000 18/04/17 16:56:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28da3 18/04/17 16:56:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28da3 closed 18/04/17 16:56:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.27 from job set of time 1523973360000 ms 18/04/17 16:56:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 617.0 (TID 617) in 12947 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:56:13 INFO scheduler.DAGScheduler: ResultStage 617 (foreachPartition at PredictorEngineApp.java:153) finished in 12.947 s 18/04/17 16:56:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 617.0, whose tasks have all completed, from pool 18/04/17 16:56:13 INFO scheduler.DAGScheduler: Job 617 finished: foreachPartition at PredictorEngineApp.java:153, took 12.986376 s 18/04/17 16:56:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x71ae088d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x71ae088d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58180, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9482, negotiated timeout = 60000 18/04/17 16:56:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9482 18/04/17 16:56:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9482 closed 18/04/17 16:56:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.34 from job set of time 1523973360000 ms 18/04/17 16:56:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 610.0 (TID 610) in 15119 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:56:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 610.0, whose tasks have all completed, from pool 18/04/17 16:56:15 INFO scheduler.DAGScheduler: ResultStage 610 (foreachPartition at PredictorEngineApp.java:153) finished in 15.120 s 18/04/17 16:56:15 INFO scheduler.DAGScheduler: Job 611 finished: foreachPartition at PredictorEngineApp.java:153, took 15.136466 s 18/04/17 16:56:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x25b3bb8b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x25b3bb8b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36335, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94ac, negotiated timeout = 60000 18/04/17 16:56:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94ac 18/04/17 16:56:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94ac closed 18/04/17 16:56:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.29 from job set of time 1523973360000 ms 18/04/17 16:56:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 612.0 (TID 612) in 16801 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:56:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 612.0, whose tasks have all completed, from pool 18/04/17 16:56:16 INFO scheduler.DAGScheduler: ResultStage 612 (foreachPartition at PredictorEngineApp.java:153) finished in 16.802 s 18/04/17 16:56:16 INFO scheduler.DAGScheduler: Job 612 finished: foreachPartition at PredictorEngineApp.java:153, took 16.824665 s 18/04/17 16:56:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xee6aa51 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xee6aa510x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36340, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94ad, negotiated timeout = 60000 18/04/17 16:56:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94ad 18/04/17 16:56:16 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94ad closed 18/04/17 16:56:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:16 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.23 from job set of time 1523973360000 ms 18/04/17 16:56:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 608.0 (TID 608) in 25094 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:56:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 608.0, whose tasks have all completed, from pool 18/04/17 16:56:25 INFO scheduler.DAGScheduler: ResultStage 608 (foreachPartition at PredictorEngineApp.java:153) finished in 25.094 s 18/04/17 16:56:25 INFO scheduler.DAGScheduler: Job 608 finished: foreachPartition at PredictorEngineApp.java:153, took 25.104879 s 18/04/17 16:56:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5c3cce44 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5c3cce440x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40955, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 607.0 (TID 607) in 25104 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:56:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 607.0, whose tasks have all completed, from pool 18/04/17 16:56:25 INFO scheduler.DAGScheduler: ResultStage 607 (foreachPartition at PredictorEngineApp.java:153) finished in 25.105 s 18/04/17 16:56:25 INFO scheduler.DAGScheduler: Job 607 finished: foreachPartition at PredictorEngineApp.java:153, took 25.111146 s 18/04/17 16:56:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4dc7c2c4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:56:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4dc7c2c40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:56:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:56:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36361, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:56:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28daa, negotiated timeout = 60000 18/04/17 16:56:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94af, negotiated timeout = 60000 18/04/17 16:56:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28daa 18/04/17 16:56:25 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28daa closed 18/04/17 16:56:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94af 18/04/17 16:56:25 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94af closed 18/04/17 16:56:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:56:25 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.1 from job set of time 1523973360000 ms 18/04/17 16:56:25 INFO scheduler.JobScheduler: Finished job streaming job 1523973360000 ms.5 from job set of time 1523973360000 ms 18/04/17 16:56:25 INFO scheduler.JobScheduler: Total delay: 25.250 s for time 1523973360000 ms (execution: 25.194 s) 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 792 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 792 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 792 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 792 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 793 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 793 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 793 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 793 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 794 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 794 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 794 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 794 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 795 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 795 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 795 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 795 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 796 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 796 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 796 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 796 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 797 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 797 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 797 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 797 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 798 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 798 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 798 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 798 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 799 from 
persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 799 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 799 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 799 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 800 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 800 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 800 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 800 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 801 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 801 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 801 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 801 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 802 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 802 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 802 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 802 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 803 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 803 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 803 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 803 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 804 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 804 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 804 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 804 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 805 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 805 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 805 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 805 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 806 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 806 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 806 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 806 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 807 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 807 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 807 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 807 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 808 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 808 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 808 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 808 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 809 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 809 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 809 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 809 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 810 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 810 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 810 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 810 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 811 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 811 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 811 from persistence list 18/04/17 16:56:25 INFO 
storage.BlockManager: Removing RDD 811 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 812 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 812 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 812 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 812 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 813 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 813 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 813 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 813 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 814 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 814 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 814 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 814 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 815 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 815 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 815 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 815 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 816 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 816 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 816 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 816 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 817 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 817 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 817 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 817 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 818 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 818 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 818 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 818 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 819 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 819 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 819 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 819 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 820 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 820 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 820 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 820 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 821 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 821 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 821 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 821 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 822 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 822 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 822 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 822 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 823 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 823 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 823 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 823 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 824 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 824 18/04/17 
16:56:25 INFO kafka.KafkaRDD: Removing RDD 824 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 824 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 825 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 825 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 825 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 825 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 826 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 826 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 826 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 826 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 827 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 827 18/04/17 16:56:25 INFO kafka.KafkaRDD: Removing RDD 827 from persistence list 18/04/17 16:56:25 INFO storage.BlockManager: Removing RDD 827 18/04/17 16:56:25 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:56:25 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973240000 ms 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 631 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_607_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_607_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 608 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 610 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_608_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_608_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 609 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_610_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_610_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 611 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_609_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_609_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 613 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_611_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_611_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 612 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_613_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_613_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 614 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_612_piece0 on ***IP 
masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_612_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 616 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_614_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_614_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 615 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_616_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_616_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 617 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_615_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_615_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 619 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_617_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_617_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 618 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_619_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_619_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 620 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_618_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_618_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 622 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_620_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_620_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 621 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_621_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_621_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 624 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_622_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_622_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 623 18/04/17 16:56:25 INFO 
storage.BlockManagerInfo: Removed broadcast_624_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_624_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 625 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_623_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_623_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_633_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_633_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 634 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_632_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_632_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 627 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_625_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_625_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 626 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_627_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_627_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 628 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_626_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_626_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 630 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_628_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_628_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 629 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_630_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_630_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_629_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_629_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 633 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_631_piece0 on ***IP masked***:45737 in 
memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:56:25 INFO storage.BlockManagerInfo: Removed broadcast_631_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:56:25 INFO spark.ContextCleaner: Cleaned accumulator 632 18/04/17 16:57:00 INFO scheduler.JobScheduler: Added jobs for time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.0 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.1 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.2 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.3 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.0 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.5 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.4 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.3 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.4 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.6 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.8 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.9 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.7 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.10 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.11 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.12 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.13 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.13 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.14 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.16 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.16 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.15 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.14 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.18 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job 
streaming job 1523973420000 ms.20 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.19 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.17 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.17 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.22 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.21 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.21 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.23 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.24 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.25 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.26 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.27 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.28 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.29 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.30 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.31 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.30 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.32 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.33 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.35 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973420000 ms.34 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.35 from job set of time 1523973420000 ms 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 634 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 634 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 634 (KafkaRDD[887] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO 
spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_634 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_634_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_634_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 634 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 634 (KafkaRDD[887] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 634.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 635 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: 
Final stage: ResultStage 635 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 635 (KafkaRDD[883] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 634.0 (TID 634, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_635 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_635_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_635_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 635 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 635 (KafkaRDD[883] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 635.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 636 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 636 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 636 (KafkaRDD[893] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 635.0 (TID 635, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_636 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_636_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_636_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 636 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 636 (KafkaRDD[893] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 636.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 637 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 637 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 637 (KafkaRDD[888] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_637 stored as values in memory (estimated size 5.7 KB, free 490.5 
MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 636.0 (TID 636, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_637_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_637_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 637 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 637 (KafkaRDD[888] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 637.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 638 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 638 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 638 (KafkaRDD[889] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_638 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 637.0 (TID 637, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_638_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_638_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 638 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 638 (KafkaRDD[889] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 638.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 639 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 639 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 639 (KafkaRDD[882] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 638.0 (TID 638, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_635_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_639 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_634_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added 
broadcast_636_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_639_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_639_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 639 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 639 (KafkaRDD[882] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 639.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 640 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 640 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 640 (KafkaRDD[891] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 639.0 (TID 639, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_640 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_640_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_640_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 640 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 640 (KafkaRDD[891] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 640.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 641 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 641 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 641 (KafkaRDD[892] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_641 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 640.0 (TID 640, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_638_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_641_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_641_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 
16:57:00 INFO spark.SparkContext: Created broadcast 641 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 641 (KafkaRDD[892] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 641.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 643 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 642 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 642 (KafkaRDD[879] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_642 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 641.0 (TID 641, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_640_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_642_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_637_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_642_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 642 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 642 (KafkaRDD[879] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 642.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 642 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 643 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 643 (KafkaRDD[870] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 642.0 (TID 642, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_643 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_639_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_643_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_643_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 643 from broadcast at 
DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 643 (KafkaRDD[870] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 643.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 644 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 644 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 644 (KafkaRDD[865] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_644 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 643.0 (TID 643, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_641_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_644_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_644_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 644 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 644 (KafkaRDD[865] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 644.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 645 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 645 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 645 (KafkaRDD[890] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_642_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_645 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 644.0 (TID 644, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_645_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_645_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 645 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 645 (KafkaRDD[890] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO 
cluster.YarnClusterScheduler: Adding task set 645.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 647 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 646 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 646 (KafkaRDD[876] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_646 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 645.0 (TID 645, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_644_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_646_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_646_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 646 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 646 (KafkaRDD[876] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 646.0 with 1 tasks 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_643_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 646 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 647 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 647 (KafkaRDD[884] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_647 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 646.0 (TID 646, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_647_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_647_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 647 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 647 (KafkaRDD[884] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 647.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 648 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 648 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 648 (KafkaRDD[873] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_648 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 647.0 (TID 647, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_648_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_648_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 648 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 648 (KafkaRDD[873] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 648.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 649 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 649 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 649 (KafkaRDD[866] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_649 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 648.0 (TID 648, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_646_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_649_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_649_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 649 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 649 (KafkaRDD[866] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 649.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 650 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 650 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 650 (KafkaRDD[872] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_650 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 649.0 (TID 649, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_645_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_650_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_650_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 650 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 650 (KafkaRDD[872] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 650.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 651 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 651 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 651 (KafkaRDD[874] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_651 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 650.0 (TID 650, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_647_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_651_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_651_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 651 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 651 (KafkaRDD[874] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 651.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 653 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 652 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 652 (KafkaRDD[875] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_652 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:57:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 651.0 (TID 651, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_649_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_652_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_652_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 652 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 652 (KafkaRDD[875] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 652.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 652 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 653 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 653 (KafkaRDD[895] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_653 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_648_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 652.0 (TID 652, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_653_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_653_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 653 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 653 (KafkaRDD[895] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 653.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 654 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 654 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 654 (KafkaRDD[871] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_654 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 653.0 (TID 653, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_654_piece0 stored as bytes in 
memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_654_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 654 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 654 (KafkaRDD[871] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 654.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 655 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 655 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 655 (KafkaRDD[898] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_655 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 654.0 (TID 654, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_652_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_650_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_651_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_655_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_655_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 655 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 655 (KafkaRDD[898] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 655.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 656 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 656 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 656 (KafkaRDD[897] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_656 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 655.0 (TID 655, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_653_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO 
storage.MemoryStore: Block broadcast_656_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_656_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 656 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 656 (KafkaRDD[897] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 656.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 657 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 657 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 657 (KafkaRDD[886] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_657 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 656.0 (TID 656, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_657_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_657_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 657 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 657 (KafkaRDD[886] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 657.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 658 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 658 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 658 (KafkaRDD[896] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_658 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 657.0 (TID 657, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_654_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_658_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_658_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 658 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 
INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 658 (KafkaRDD[896] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 658.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Got job 659 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 659 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting ResultStage 659 (KafkaRDD[869] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_659 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 658.0 (TID 658, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_656_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.MemoryStore: Block broadcast_659_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_659_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:57:00 INFO spark.SparkContext: Created broadcast 659 from broadcast at DAGScheduler.scala:1006 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 659 (KafkaRDD[869] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Adding task set 659.0 with 1 tasks 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 659.0 (TID 659, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_655_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_658_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_657_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO storage.BlockManagerInfo: Added broadcast_659_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:57:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 650.0 (TID 650) in 726 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:57:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 650.0, whose tasks have all completed, from pool 18/04/17 16:57:00 INFO scheduler.DAGScheduler: ResultStage 650 (foreachPartition at PredictorEngineApp.java:153) finished in 0.728 s 18/04/17 16:57:00 INFO scheduler.DAGScheduler: Job 650 finished: foreachPartition at PredictorEngineApp.java:153, took 0.803573 s 18/04/17 16:57:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x174052f3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 
sessionTimeout=60000 watcher=hconnection-0x174052f30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41098, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dbe, negotiated timeout = 60000 18/04/17 16:57:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dbe 18/04/17 16:57:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dbe closed 18/04/17 16:57:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.8 from job set of time 1523973420000 ms 18/04/17 16:57:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 638.0 (TID 638) in 1039 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:57:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 638.0, whose tasks have all completed, from pool 18/04/17 16:57:01 INFO scheduler.DAGScheduler: ResultStage 638 (foreachPartition at PredictorEngineApp.java:153) finished in 1.040 s 18/04/17 16:57:01 INFO scheduler.DAGScheduler: Job 638 finished: foreachPartition at PredictorEngineApp.java:153, took 1.061913 s 18/04/17 16:57:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x11f4588a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x11f4588a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41102, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dbf, negotiated timeout = 60000 18/04/17 16:57:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dbf 18/04/17 16:57:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dbf closed 18/04/17 16:57:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.25 from job set of time 1523973420000 ms 18/04/17 16:57:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 654.0 (TID 654) in 2368 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:57:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 654.0, whose tasks have all completed, from pool 18/04/17 16:57:02 INFO scheduler.DAGScheduler: ResultStage 654 (foreachPartition at PredictorEngineApp.java:153) finished in 2.369 s 18/04/17 16:57:02 INFO scheduler.DAGScheduler: Job 654 finished: foreachPartition at PredictorEngineApp.java:153, took 2.467079 s 18/04/17 16:57:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72d3db3c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x72d3db3c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41107, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dc0, negotiated timeout = 60000 18/04/17 16:57:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dc0 18/04/17 16:57:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dc0 closed 18/04/17 16:57:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.7 from job set of time 1523973420000 ms 18/04/17 16:57:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 658.0 (TID 658) in 3578 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:57:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 658.0, whose tasks have all completed, from pool 18/04/17 16:57:03 INFO scheduler.DAGScheduler: ResultStage 658 (foreachPartition at PredictorEngineApp.java:153) finished in 3.580 s 18/04/17 16:57:03 INFO scheduler.DAGScheduler: Job 658 finished: foreachPartition at PredictorEngineApp.java:153, took 3.687756 s 18/04/17 16:57:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2eac124a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2eac124a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36518, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94bf, negotiated timeout = 60000 18/04/17 16:57:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94bf 18/04/17 16:57:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94bf closed 18/04/17 16:57:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.32 from job set of time 1523973420000 ms 18/04/17 16:57:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 639.0 (TID 639) in 4073 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:57:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 639.0, whose tasks have all completed, from pool 18/04/17 16:57:04 INFO scheduler.DAGScheduler: ResultStage 639 (foreachPartition at PredictorEngineApp.java:153) finished in 4.074 s 18/04/17 16:57:04 INFO scheduler.DAGScheduler: Job 639 finished: foreachPartition at PredictorEngineApp.java:153, took 4.102540 s 18/04/17 16:57:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x538c3118 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x538c31180x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36522, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94c0, negotiated timeout = 60000 18/04/17 16:57:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94c0 18/04/17 16:57:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94c0 closed 18/04/17 16:57:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.18 from job set of time 1523973420000 ms 18/04/17 16:57:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 656.0 (TID 656) in 4485 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:57:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 656.0, whose tasks have all completed, from pool 18/04/17 16:57:04 INFO scheduler.DAGScheduler: ResultStage 656 (foreachPartition at PredictorEngineApp.java:153) finished in 4.486 s 18/04/17 16:57:04 INFO scheduler.DAGScheduler: Job 656 finished: foreachPartition at PredictorEngineApp.java:153, took 4.589785 s 18/04/17 16:57:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5924a31f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5924a31f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36525, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94c3, negotiated timeout = 60000 18/04/17 16:57:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94c3 18/04/17 16:57:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94c3 closed 18/04/17 16:57:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.33 from job set of time 1523973420000 ms 18/04/17 16:57:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 653.0 (TID 653) in 5227 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:57:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 653.0, whose tasks have all completed, from pool 18/04/17 16:57:05 INFO scheduler.DAGScheduler: ResultStage 653 (foreachPartition at PredictorEngineApp.java:153) finished in 5.238 s 18/04/17 16:57:05 INFO scheduler.DAGScheduler: Job 652 finished: foreachPartition at PredictorEngineApp.java:153, took 5.322467 s 18/04/17 16:57:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x119460ff connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x119460ff0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36529, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94c4, negotiated timeout = 60000 18/04/17 16:57:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94c4 18/04/17 16:57:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94c4 closed 18/04/17 16:57:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.31 from job set of time 1523973420000 ms 18/04/17 16:57:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 646.0 (TID 646) in 5894 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:57:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 646.0, whose tasks have all completed, from pool 18/04/17 16:57:06 INFO scheduler.DAGScheduler: ResultStage 646 (foreachPartition at PredictorEngineApp.java:153) finished in 5.895 s 18/04/17 16:57:06 INFO scheduler.DAGScheduler: Job 647 finished: foreachPartition at PredictorEngineApp.java:153, took 5.958418 s 18/04/17 16:57:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2ee69f27 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2ee69f270x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58384, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9497, negotiated timeout = 60000 18/04/17 16:57:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9497 18/04/17 16:57:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9497 closed 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.12 from job set of time 1523973420000 ms 18/04/17 16:57:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 636.0 (TID 636) in 6243 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:57:06 INFO scheduler.DAGScheduler: ResultStage 636 (foreachPartition at PredictorEngineApp.java:153) finished in 6.244 s 18/04/17 16:57:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 636.0, whose tasks have all completed, from pool 18/04/17 16:57:06 INFO scheduler.DAGScheduler: Job 636 finished: foreachPartition at PredictorEngineApp.java:153, took 6.259575 s 18/04/17 16:57:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xd2986d5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xd2986d50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41131, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dc5, negotiated timeout = 60000 18/04/17 16:57:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dc5 18/04/17 16:57:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dc5 closed 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.29 from job set of time 1523973420000 ms 18/04/17 16:57:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 648.0 (TID 648) in 6226 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:57:06 INFO scheduler.DAGScheduler: ResultStage 648 (foreachPartition at PredictorEngineApp.java:153) finished in 6.227 s 18/04/17 16:57:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 648.0, whose tasks have all completed, from pool 18/04/17 16:57:06 INFO scheduler.DAGScheduler: Job 648 finished: foreachPartition at PredictorEngineApp.java:153, took 6.296619 s 18/04/17 16:57:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x619b0162 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x619b01620x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41134, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dc6, negotiated timeout = 60000 18/04/17 16:57:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dc6 18/04/17 16:57:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dc6 closed 18/04/17 16:57:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.9 from job set of time 1523973420000 ms 18/04/17 16:57:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 635.0 (TID 635) in 8285 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:57:08 INFO scheduler.DAGScheduler: ResultStage 635 (foreachPartition at PredictorEngineApp.java:153) finished in 8.286 s 18/04/17 16:57:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 635.0, whose tasks have all completed, from pool 18/04/17 16:57:08 INFO scheduler.DAGScheduler: Job 635 finished: foreachPartition at PredictorEngineApp.java:153, took 8.298367 s 18/04/17 16:57:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3ee7ae12 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3ee7ae120x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58398, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9499, negotiated timeout = 60000 18/04/17 16:57:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9499 18/04/17 16:57:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9499 closed 18/04/17 16:57:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.19 from job set of time 1523973420000 ms 18/04/17 16:57:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 642.0 (TID 642) in 10983 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:57:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 642.0, whose tasks have all completed, from pool 18/04/17 16:57:11 INFO scheduler.DAGScheduler: ResultStage 642 (foreachPartition at PredictorEngineApp.java:153) finished in 10.984 s 18/04/17 16:57:11 INFO scheduler.DAGScheduler: Job 643 finished: foreachPartition at PredictorEngineApp.java:153, took 11.027075 s 18/04/17 16:57:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x49c42768 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x49c427680x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41149, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dca, negotiated timeout = 60000 18/04/17 16:57:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dca 18/04/17 16:57:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dca closed 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.15 from job set of time 1523973420000 ms 18/04/17 16:57:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 649.0 (TID 649) in 11053 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:57:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 649.0, whose tasks have all completed, from pool 18/04/17 16:57:11 INFO scheduler.DAGScheduler: ResultStage 649 (foreachPartition at PredictorEngineApp.java:153) finished in 11.054 s 18/04/17 16:57:11 INFO scheduler.DAGScheduler: Job 649 finished: foreachPartition at PredictorEngineApp.java:153, took 11.127323 s 18/04/17 16:57:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x41001b19 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x41001b190x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41152, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dcb, negotiated timeout = 60000 18/04/17 16:57:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dcb 18/04/17 16:57:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dcb closed 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.2 from job set of time 1523973420000 ms 18/04/17 16:57:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 641.0 (TID 641) in 11590 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:57:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 641.0, whose tasks have all completed, from pool 18/04/17 16:57:11 INFO scheduler.DAGScheduler: ResultStage 641 (foreachPartition at PredictorEngineApp.java:153) finished in 11.591 s 18/04/17 16:57:11 INFO scheduler.DAGScheduler: Job 641 finished: foreachPartition at PredictorEngineApp.java:153, took 11.627707 s 18/04/17 16:57:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4b238635 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4b2386350x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41155, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dcc, negotiated timeout = 60000 18/04/17 16:57:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dcc 18/04/17 16:57:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dcc closed 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.28 from job set of time 1523973420000 ms 18/04/17 16:57:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 645.0 (TID 645) in 11641 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:57:11 INFO scheduler.DAGScheduler: ResultStage 645 (foreachPartition at PredictorEngineApp.java:153) finished in 11.642 s 18/04/17 16:57:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 645.0, whose tasks have all completed, from pool 18/04/17 16:57:11 INFO scheduler.DAGScheduler: Job 645 finished: foreachPartition at PredictorEngineApp.java:153, took 11.701164 s 18/04/17 16:57:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x369b48d4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x369b48d40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41158, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dcd, negotiated timeout = 60000 18/04/17 16:57:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dcd 18/04/17 16:57:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dcd closed 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.26 from job set of time 1523973420000 ms 18/04/17 16:57:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 634.0 (TID 634) in 11789 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:57:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 634.0, whose tasks have all completed, from pool 18/04/17 16:57:11 INFO scheduler.DAGScheduler: ResultStage 634 (foreachPartition at PredictorEngineApp.java:153) finished in 11.789 s 18/04/17 16:57:11 INFO scheduler.DAGScheduler: Job 634 finished: foreachPartition at PredictorEngineApp.java:153, took 11.797659 s 18/04/17 16:57:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x43934e11 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x43934e110x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36566, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94c9, negotiated timeout = 60000 18/04/17 16:57:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94c9 18/04/17 16:57:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94c9 closed 18/04/17 16:57:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.23 from job set of time 1523973420000 ms 18/04/17 16:57:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 640.0 (TID 640) in 12006 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:57:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 640.0, whose tasks have all completed, from pool 18/04/17 16:57:12 INFO scheduler.DAGScheduler: ResultStage 640 (foreachPartition at PredictorEngineApp.java:153) finished in 12.007 s 18/04/17 16:57:12 INFO scheduler.DAGScheduler: Job 640 finished: foreachPartition at PredictorEngineApp.java:153, took 12.040238 s 18/04/17 16:57:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x21f28101 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x21f281010x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36571, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94ca, negotiated timeout = 60000 18/04/17 16:57:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94ca 18/04/17 16:57:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94ca closed 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.27 from job set of time 1523973420000 ms 18/04/17 16:57:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 637.0 (TID 637) in 12199 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:57:12 INFO scheduler.DAGScheduler: ResultStage 637 (foreachPartition at PredictorEngineApp.java:153) finished in 12.199 s 18/04/17 16:57:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 637.0, whose tasks have all completed, from pool 18/04/17 16:57:12 INFO scheduler.DAGScheduler: Job 637 finished: foreachPartition at PredictorEngineApp.java:153, took 12.218589 s 18/04/17 16:57:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x31ccea62 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x31ccea620x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58425, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a949d, negotiated timeout = 60000 18/04/17 16:57:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a949d 18/04/17 16:57:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a949d closed 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.24 from job set of time 1523973420000 ms 18/04/17 16:57:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 655.0 (TID 655) in 12171 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:57:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 655.0, whose tasks have all completed, from pool 18/04/17 16:57:12 INFO scheduler.DAGScheduler: ResultStage 655 (foreachPartition at PredictorEngineApp.java:153) finished in 12.171 s 18/04/17 16:57:12 INFO scheduler.DAGScheduler: Job 655 finished: foreachPartition at PredictorEngineApp.java:153, took 12.273119 s 18/04/17 16:57:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xccc28a5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xccc28a50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36577, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94cb, negotiated timeout = 60000 18/04/17 16:57:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94cb 18/04/17 16:57:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94cb closed 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.34 from job set of time 1523973420000 ms 18/04/17 16:57:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 643.0 (TID 643) in 12671 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:57:12 INFO scheduler.DAGScheduler: ResultStage 643 (foreachPartition at PredictorEngineApp.java:153) finished in 12.671 s 18/04/17 16:57:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 643.0, whose tasks have all completed, from pool 18/04/17 16:57:12 INFO scheduler.DAGScheduler: Job 642 finished: foreachPartition at PredictorEngineApp.java:153, took 12.721689 s 18/04/17 16:57:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x639eb072 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x639eb0720x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36580, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94cc, negotiated timeout = 60000 18/04/17 16:57:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94cc 18/04/17 16:57:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94cc closed 18/04/17 16:57:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.6 from job set of time 1523973420000 ms 18/04/17 16:57:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 644.0 (TID 644) in 13074 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:57:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 644.0, whose tasks have all completed, from pool 18/04/17 16:57:13 INFO scheduler.DAGScheduler: ResultStage 644 (foreachPartition at PredictorEngineApp.java:153) finished in 13.074 s 18/04/17 16:57:13 INFO scheduler.DAGScheduler: Job 644 finished: foreachPartition at PredictorEngineApp.java:153, took 13.129282 s 18/04/17 16:57:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5f84265 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5f842650x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41179, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dd0, negotiated timeout = 60000 18/04/17 16:57:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dd0 18/04/17 16:57:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dd0 closed 18/04/17 16:57:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.1 from job set of time 1523973420000 ms 18/04/17 16:57:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 647.0 (TID 647) in 13729 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:57:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 647.0, whose tasks have all completed, from pool 18/04/17 16:57:13 INFO scheduler.DAGScheduler: ResultStage 647 (foreachPartition at PredictorEngineApp.java:153) finished in 13.730 s 18/04/17 16:57:13 INFO scheduler.DAGScheduler: Job 646 finished: foreachPartition at PredictorEngineApp.java:153, took 13.796366 s 18/04/17 16:57:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x595346d8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x595346d80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36587, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94cd, negotiated timeout = 60000 18/04/17 16:57:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94cd 18/04/17 16:57:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94cd closed 18/04/17 16:57:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.20 from job set of time 1523973420000 ms 18/04/17 16:57:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 659.0 (TID 659) in 17181 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:57:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 659.0, whose tasks have all completed, from pool 18/04/17 16:57:17 INFO scheduler.DAGScheduler: ResultStage 659 (foreachPartition at PredictorEngineApp.java:153) finished in 17.181 s 18/04/17 16:57:17 INFO scheduler.DAGScheduler: Job 659 finished: foreachPartition at PredictorEngineApp.java:153, took 17.291305 s 18/04/17 16:57:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x591b7934 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x591b79340x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41191, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dd1, negotiated timeout = 60000 18/04/17 16:57:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dd1 18/04/17 16:57:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:17 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dd1 closed 18/04/17 16:57:17 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.5 from job set of time 1523973420000 ms 18/04/17 16:57:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 652.0 (TID 652) in 18530 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:57:18 INFO scheduler.DAGScheduler: ResultStage 652 (foreachPartition at PredictorEngineApp.java:153) finished in 18.532 s 18/04/17 16:57:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 652.0, whose tasks have all completed, from pool 18/04/17 16:57:18 INFO scheduler.DAGScheduler: Job 653 finished: foreachPartition at PredictorEngineApp.java:153, took 18.613261 s 18/04/17 16:57:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5bcaf92e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5bcaf92e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41195, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dd2, negotiated timeout = 60000 18/04/17 16:57:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dd2 18/04/17 16:57:18 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dd2 closed 18/04/17 16:57:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:18 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.11 from job set of time 1523973420000 ms 18/04/17 16:57:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 651.0 (TID 651) in 18712 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:57:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 651.0, whose tasks have all completed, from pool 18/04/17 16:57:18 INFO scheduler.DAGScheduler: ResultStage 651 (foreachPartition at PredictorEngineApp.java:153) finished in 18.713 s 18/04/17 16:57:18 INFO scheduler.DAGScheduler: Job 651 finished: foreachPartition at PredictorEngineApp.java:153, took 18.792270 s 18/04/17 16:57:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x548eb5ab connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:57:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x548eb5ab0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:57:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:57:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36603, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:57:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94d1, negotiated timeout = 60000 18/04/17 16:57:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94d1 18/04/17 16:57:18 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94d1 closed 18/04/17 16:57:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:57:18 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.10 from job set of time 1523973420000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Added jobs for time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.0 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.1 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.2 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.3 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.4 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.5 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.0 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.6 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.7 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.4 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.3 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.8 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.10 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.9 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.11 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.12 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.13 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.14 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.13 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.15 from job set of time 1523973480000 ms 18/04/17 16:58:00 
INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.16 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.14 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.17 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.18 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.16 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.19 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.17 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.20 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.21 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.21 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.22 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.23 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.24 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.25 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.26 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.28 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.27 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.30 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.29 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.31 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.30 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.32 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.33 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.34 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973480000 ms.35 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.35 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO 
spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 660 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 660 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 660 (KafkaRDD[912] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_660 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_660_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_660_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 660 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 660 (KafkaRDD[912] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 660.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 661 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 661 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 661 (KafkaRDD[905] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 660.0 (TID 660, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_661 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_661_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_661_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 661 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 661 (KafkaRDD[905] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 661.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 662 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 662 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 662 (KafkaRDD[911] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_662 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 661.0 (TID 661, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_662_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_662_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 662 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 662 (KafkaRDD[911] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 662.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 663 (foreachPartition at PredictorEngineApp.java:153) with 1 output 
partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 663 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 663 (KafkaRDD[906] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_663 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 662.0 (TID 662, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_663_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_663_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_660_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 663 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 663 (KafkaRDD[906] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 663.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 664 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 664 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 664 (KafkaRDD[934] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_664 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 663.0 (TID 663, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_664_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_664_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 664 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 664 (KafkaRDD[934] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 664.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 665 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 665 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 665 (KafkaRDD[910] at 
createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 664.0 (TID 664, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_665 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_662_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_665_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_665_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 665 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 665 (KafkaRDD[910] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 665.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 666 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 666 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 666 (KafkaRDD[931] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_666 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 665.0 (TID 665, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_661_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_666_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_666_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 666 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 666 (KafkaRDD[931] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 666.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 667 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 667 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 667 (KafkaRDD[932] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 666.0 (TID 666, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 
2042 bytes) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_667 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_667_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_667_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 667 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 667 (KafkaRDD[932] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 667.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 668 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 668 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 668 (KafkaRDD[908] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_668 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 667.0 (TID 667, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_668_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_668_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 668 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 668 (KafkaRDD[908] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 668.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 669 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 669 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 669 (KafkaRDD[915] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_669 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 668.0 (TID 668, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_667_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_669_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_669_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 669 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 669 (KafkaRDD[915] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 669.0 with 1 tasks 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_659_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 670 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 670 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 670 (KafkaRDD[901] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_670 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 669.0 (TID 669, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_664_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_663_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_666_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_670_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_670_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 670 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 670 (KafkaRDD[901] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 670.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 671 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 671 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 671 (KafkaRDD[909] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_671 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_659_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 670.0 (TID 670, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:58:00 INFO 
storage.MemoryStore: Block broadcast_671_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_671_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 671 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 671 (KafkaRDD[909] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 671.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 672 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 672 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 672 (KafkaRDD[920] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_672 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 671.0 (TID 671, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_635_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_635_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_672_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_672_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 672 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 672 (KafkaRDD[920] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 672.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 673 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 673 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 636 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 673 (KafkaRDD[927] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 672.0 (TID 672, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_673 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_634_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO 
storage.BlockManagerInfo: Added broadcast_668_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_634_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 635 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_673_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_673_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_637_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 673 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 673 (KafkaRDD[927] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 673.0 with 1 tasks 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_669_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 674 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 674 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 674 (KafkaRDD[925] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_674 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 673.0 (TID 673, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_637_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 638 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_665_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_671_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_636_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_674_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_674_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 674 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 674 (KafkaRDD[925] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 674.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 675 (foreachPartition at PredictorEngineApp.java:153) with 1 
output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 675 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 675 (KafkaRDD[902] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_636_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_675 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 674.0 (TID 674, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 637 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_639_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_673_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_672_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_639_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 640 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_675_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_675_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_638_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 675 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 675 (KafkaRDD[902] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 675.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 676 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 676 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 676 (KafkaRDD[919] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_638_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_676 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 639 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 675.0 (TID 675, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: 
Removed broadcast_641_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_676_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_676_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 676 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 676 (KafkaRDD[919] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 676.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 677 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 677 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 677 (KafkaRDD[926] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_677 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 676.0 (TID 676, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_674_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_641_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 642 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_675_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_640_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_640_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 641 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_677_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_677_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_643_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 677 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 677 (KafkaRDD[926] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 677.0 with 1 tasks 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_676_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 678 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 
16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 678 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 678 (KafkaRDD[929] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_678 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 677.0 (TID 677, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_643_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_670_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 644 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_642_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_678_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_678_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_642_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 678 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 678 (KafkaRDD[929] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 678.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 679 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 679 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 679 (KafkaRDD[933] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_679 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 643 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 678.0 (TID 678, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_645_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_645_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_679_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_679_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 
16:58:00 INFO storage.BlockManagerInfo: Added broadcast_677_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 679 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 679 (KafkaRDD[933] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 679.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 680 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 646 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 680 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 680 (KafkaRDD[924] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_680 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_644_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 679.0 (TID 679, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_644_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 645 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_680_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_680_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_647_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 680 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 680 (KafkaRDD[924] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 680.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 682 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 681 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 681 (KafkaRDD[918] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_647_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_681 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 680.0 (TID 680, ***hostname 
masked***, executor 6, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 648 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_681_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_678_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_681_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_679_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_646_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 681 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 681 (KafkaRDD[918] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 681.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 681 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 682 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 682 (KafkaRDD[928] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_682 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 681.0 (TID 681, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_646_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_682_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_682_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 682 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 682 (KafkaRDD[928] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 682.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 684 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 683 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 683 (KafkaRDD[923] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_683 stored as values in memory (estimated size 
5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_680_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 682.0 (TID 682, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_683_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_683_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 683 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 683 (KafkaRDD[923] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 683.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 683 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 684 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 684 (KafkaRDD[922] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_684 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 683.0 (TID 683, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_681_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_684_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_684_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 684 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 684 (KafkaRDD[922] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 684.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Got job 685 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 685 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting ResultStage 685 (KafkaRDD[907] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:58:00 INFO storage.MemoryStore: Block broadcast_685 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 684.0 (TID 684, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:58:00 INFO storage.MemoryStore: 
Block broadcast_685_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_685_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO spark.SparkContext: Created broadcast 685 from broadcast at DAGScheduler.scala:1006 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 685 (KafkaRDD[907] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Adding task set 685.0 with 1 tasks 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 685.0 (TID 685, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_682_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_685_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_684_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 665.0 (TID 665) in 93 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:58:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 665.0, whose tasks have all completed, from pool 18/04/17 16:58:00 INFO scheduler.DAGScheduler: ResultStage 665 (foreachPartition at PredictorEngineApp.java:153) finished in 0.094 s 18/04/17 16:58:00 INFO scheduler.DAGScheduler: Job 665 finished: foreachPartition at PredictorEngineApp.java:153, took 0.117117 s 18/04/17 16:58:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x567da89c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x567da89c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36760, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94d6, negotiated timeout = 60000 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Added broadcast_683_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94d6 18/04/17 16:58:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94d6 closed 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 647 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_649_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_649_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 650 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_648_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_648_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 649 18/04/17 16:58:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.10 from job set of time 1523973480000 ms 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_651_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_651_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 652 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_650_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_650_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 651 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_653_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_653_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 654 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_652_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_652_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 653 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_655_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_655_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned 
accumulator 656 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_654_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_654_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 655 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_656_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_656_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 657 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 660 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_658_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:00 INFO storage.BlockManagerInfo: Removed broadcast_658_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:00 INFO spark.ContextCleaner: Cleaned accumulator 659 18/04/17 16:58:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 674.0 (TID 674) in 1604 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:58:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 674.0, whose tasks have all completed, from pool 18/04/17 16:58:01 INFO scheduler.DAGScheduler: ResultStage 674 (foreachPartition at PredictorEngineApp.java:153) finished in 1.605 s 18/04/17 16:58:01 INFO scheduler.DAGScheduler: Job 674 finished: foreachPartition at PredictorEngineApp.java:153, took 1.670883 s 18/04/17 16:58:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2c1f59a9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2c1f59a90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58615, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94b0, negotiated timeout = 60000 18/04/17 16:58:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94b0 18/04/17 16:58:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94b0 closed 18/04/17 16:58:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.25 from job set of time 1523973480000 ms 18/04/17 16:58:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 666.0 (TID 666) in 2945 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:58:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 666.0, whose tasks have all completed, from pool 18/04/17 16:58:03 INFO scheduler.DAGScheduler: ResultStage 666 (foreachPartition at PredictorEngineApp.java:153) finished in 2.945 s 18/04/17 16:58:03 INFO scheduler.DAGScheduler: Job 666 finished: foreachPartition at PredictorEngineApp.java:153, took 2.972464 s 18/04/17 16:58:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x68e0b067 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x68e0b0670x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41367, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ddd, negotiated timeout = 60000 18/04/17 16:58:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ddd 18/04/17 16:58:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ddd closed 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.31 from job set of time 1523973480000 ms 18/04/17 16:58:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 668.0 (TID 668) in 3151 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:58:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 668.0, whose tasks have all completed, from pool 18/04/17 16:58:03 INFO scheduler.DAGScheduler: ResultStage 668 (foreachPartition at PredictorEngineApp.java:153) finished in 3.151 s 18/04/17 16:58:03 INFO scheduler.DAGScheduler: Job 668 finished: foreachPartition at PredictorEngineApp.java:153, took 3.184401 s 18/04/17 16:58:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x721637df connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x721637df0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36776, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94e2, negotiated timeout = 60000 18/04/17 16:58:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94e2 18/04/17 16:58:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94e2 closed 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.8 from job set of time 1523973480000 ms 18/04/17 16:58:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 685.0 (TID 685) in 3299 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:58:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 685.0, whose tasks have all completed, from pool 18/04/17 16:58:03 INFO scheduler.DAGScheduler: ResultStage 685 (foreachPartition at PredictorEngineApp.java:153) finished in 3.300 s 18/04/17 16:58:03 INFO scheduler.DAGScheduler: Job 685 finished: foreachPartition at PredictorEngineApp.java:153, took 3.402610 s 18/04/17 16:58:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x18adca4b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x18adca4b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58630, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94b2, negotiated timeout = 60000 18/04/17 16:58:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94b2 18/04/17 16:58:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94b2 closed 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.7 from job set of time 1523973480000 ms 18/04/17 16:58:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 669.0 (TID 669) in 3664 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:58:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 669.0, whose tasks have all completed, from pool 18/04/17 16:58:03 INFO scheduler.DAGScheduler: ResultStage 669 (foreachPartition at PredictorEngineApp.java:153) finished in 3.665 s 18/04/17 16:58:03 INFO scheduler.DAGScheduler: Job 669 finished: foreachPartition at PredictorEngineApp.java:153, took 3.714356 s 18/04/17 16:58:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x645b7d35 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x645b7d350x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41377, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dde, negotiated timeout = 60000 18/04/17 16:58:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dde 18/04/17 16:58:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dde closed 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 673.0 (TID 673) in 3678 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:58:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 673.0, whose tasks have all completed, from pool 18/04/17 16:58:03 INFO scheduler.DAGScheduler: ResultStage 673 (foreachPartition at PredictorEngineApp.java:153) finished in 3.680 s 18/04/17 16:58:03 INFO scheduler.DAGScheduler: Job 673 finished: foreachPartition at PredictorEngineApp.java:153, took 3.741523 s 18/04/17 16:58:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1252bf95 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1252bf950x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41380, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.15 from job set of time 1523973480000 ms 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ddf, negotiated timeout = 60000 18/04/17 16:58:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 671.0 (TID 671) in 3695 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:58:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 671.0, whose tasks have all completed, from pool 18/04/17 16:58:03 INFO scheduler.DAGScheduler: ResultStage 671 (foreachPartition at PredictorEngineApp.java:153) finished in 3.696 s 18/04/17 16:58:03 INFO scheduler.DAGScheduler: Job 671 finished: foreachPartition at PredictorEngineApp.java:153, took 3.751308 s 18/04/17 16:58:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ddf 18/04/17 16:58:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.9 from job set of time 1523973480000 ms 18/04/17 16:58:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ddf closed 18/04/17 16:58:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.27 from job set of time 1523973480000 ms 18/04/17 16:58:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 675.0 (TID 675) in 4751 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:58:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 675.0, whose tasks have all completed, from pool 18/04/17 16:58:04 INFO scheduler.DAGScheduler: ResultStage 675 (foreachPartition at PredictorEngineApp.java:153) finished in 4.752 s 18/04/17 16:58:04 INFO scheduler.DAGScheduler: Job 675 finished: foreachPartition at PredictorEngineApp.java:153, took 4.822382 s 18/04/17 16:58:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6a9c7aee connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6a9c7aee0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36789, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94e6, negotiated timeout = 60000 18/04/17 16:58:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94e6 18/04/17 16:58:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94e6 closed 18/04/17 16:58:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.2 from job set of time 1523973480000 ms 18/04/17 16:58:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 676.0 (TID 676) in 5820 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:58:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 676.0, whose tasks have all completed, from pool 18/04/17 16:58:05 INFO scheduler.DAGScheduler: ResultStage 676 (foreachPartition at PredictorEngineApp.java:153) finished in 5.821 s 18/04/17 16:58:05 INFO scheduler.DAGScheduler: Job 676 finished: foreachPartition at PredictorEngineApp.java:153, took 5.896669 s 18/04/17 16:58:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1763e81c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1763e81c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36793, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94e8, negotiated timeout = 60000 18/04/17 16:58:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 660.0 (TID 660) in 5906 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:58:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 660.0, whose tasks have all completed, from pool 18/04/17 16:58:05 INFO scheduler.DAGScheduler: ResultStage 660 (foreachPartition at PredictorEngineApp.java:153) finished in 5.906 s 18/04/17 16:58:05 INFO scheduler.DAGScheduler: Job 660 finished: foreachPartition at PredictorEngineApp.java:153, took 5.913000 s 18/04/17 16:58:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94e8 18/04/17 16:58:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94e8 closed 18/04/17 16:58:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.12 from job set of time 1523973480000 ms 18/04/17 16:58:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.19 from job set of time 1523973480000 ms 18/04/17 16:58:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 667.0 (TID 667) in 6186 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:58:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 667.0, whose tasks have all completed, from pool 18/04/17 16:58:06 INFO scheduler.DAGScheduler: ResultStage 667 (foreachPartition at PredictorEngineApp.java:153) finished in 6.187 s 18/04/17 16:58:06 INFO scheduler.DAGScheduler: Job 667 finished: foreachPartition at PredictorEngineApp.java:153, took 6.216668 s 18/04/17 16:58:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1a6f7e44 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1a6f7e440x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41392, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28de2, negotiated timeout = 60000 18/04/17 16:58:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28de2 18/04/17 16:58:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28de2 closed 18/04/17 16:58:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.32 from job set of time 1523973480000 ms 18/04/17 16:58:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 681.0 (TID 681) in 6976 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:58:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 681.0, whose tasks have all completed, from pool 18/04/17 16:58:07 INFO scheduler.DAGScheduler: ResultStage 681 (foreachPartition at PredictorEngineApp.java:153) finished in 6.976 s 18/04/17 16:58:07 INFO scheduler.DAGScheduler: Job 682 finished: foreachPartition at PredictorEngineApp.java:153, took 7.070222 s 18/04/17 16:58:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x41a62b21 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x41a62b210x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36801, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94ea, negotiated timeout = 60000 18/04/17 16:58:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94ea 18/04/17 16:58:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94ea closed 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.18 from job set of time 1523973480000 ms 18/04/17 16:58:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 663.0 (TID 663) in 7466 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:58:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 663.0, whose tasks have all completed, from pool 18/04/17 16:58:07 INFO scheduler.DAGScheduler: ResultStage 663 (foreachPartition at PredictorEngineApp.java:153) finished in 7.468 s 18/04/17 16:58:07 INFO scheduler.DAGScheduler: Job 663 finished: foreachPartition at PredictorEngineApp.java:153, took 7.484378 s 18/04/17 16:58:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x521b9ed4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x521b9ed40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58656, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94b7, negotiated timeout = 60000 18/04/17 16:58:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94b7 18/04/17 16:58:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94b7 closed 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.6 from job set of time 1523973480000 ms 18/04/17 16:58:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 678.0 (TID 678) in 7531 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:58:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 678.0, whose tasks have all completed, from pool 18/04/17 16:58:07 INFO scheduler.DAGScheduler: ResultStage 678 (foreachPartition at PredictorEngineApp.java:153) finished in 7.531 s 18/04/17 16:58:07 INFO scheduler.DAGScheduler: Job 678 finished: foreachPartition at PredictorEngineApp.java:153, took 7.616724 s 18/04/17 16:58:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x68e3efd4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x68e3efd40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41403, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28de3, negotiated timeout = 60000 18/04/17 16:58:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28de3 18/04/17 16:58:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28de3 closed 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.29 from job set of time 1523973480000 ms 18/04/17 16:58:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 664.0 (TID 664) in 7653 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:58:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 664.0, whose tasks have all completed, from pool 18/04/17 16:58:07 INFO scheduler.DAGScheduler: ResultStage 664 (foreachPartition at PredictorEngineApp.java:153) finished in 7.653 s 18/04/17 16:58:07 INFO scheduler.DAGScheduler: Job 664 finished: foreachPartition at PredictorEngineApp.java:153, took 7.673872 s 18/04/17 16:58:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x22eaa731 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x22eaa7310x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36811, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94ec, negotiated timeout = 60000 18/04/17 16:58:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94ec 18/04/17 16:58:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94ec closed 18/04/17 16:58:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.34 from job set of time 1523973480000 ms 18/04/17 16:58:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 680.0 (TID 680) in 8337 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:58:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 680.0, whose tasks have all completed, from pool 18/04/17 16:58:08 INFO scheduler.DAGScheduler: ResultStage 680 (foreachPartition at PredictorEngineApp.java:153) finished in 8.338 s 18/04/17 16:58:08 INFO scheduler.DAGScheduler: Job 680 finished: foreachPartition at PredictorEngineApp.java:153, took 8.429570 s 18/04/17 16:58:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x483ff7ff connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x483ff7ff0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36817, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94ee, negotiated timeout = 60000 18/04/17 16:58:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94ee 18/04/17 16:58:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94ee closed 18/04/17 16:58:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.24 from job set of time 1523973480000 ms 18/04/17 16:58:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 672.0 (TID 672) in 8439 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:58:08 INFO scheduler.DAGScheduler: ResultStage 672 (foreachPartition at PredictorEngineApp.java:153) finished in 8.440 s 18/04/17 16:58:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 672.0, whose tasks have all completed, from pool 18/04/17 16:58:08 INFO scheduler.DAGScheduler: Job 672 finished: foreachPartition at PredictorEngineApp.java:153, took 8.497864 s 18/04/17 16:58:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66fa9b1b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x66fa9b1b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41415, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28de4, negotiated timeout = 60000 18/04/17 16:58:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28de4 18/04/17 16:58:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28de4 closed 18/04/17 16:58:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.20 from job set of time 1523973480000 ms 18/04/17 16:58:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 682.0 (TID 682) in 9187 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:58:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 682.0, whose tasks have all completed, from pool 18/04/17 16:58:09 INFO scheduler.DAGScheduler: ResultStage 682 (foreachPartition at PredictorEngineApp.java:153) finished in 9.188 s 18/04/17 16:58:09 INFO scheduler.DAGScheduler: Job 681 finished: foreachPartition at PredictorEngineApp.java:153, took 9.283847 s 18/04/17 16:58:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2d08c871 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2d08c8710x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41419, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28de5, negotiated timeout = 60000 18/04/17 16:58:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28de5 18/04/17 16:58:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28de5 closed 18/04/17 16:58:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.28 from job set of time 1523973480000 ms 18/04/17 16:58:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 683.0 (TID 683) in 9293 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:58:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 683.0, whose tasks have all completed, from pool 18/04/17 16:58:09 INFO scheduler.DAGScheduler: ResultStage 683 (foreachPartition at PredictorEngineApp.java:153) finished in 9.293 s 18/04/17 16:58:09 INFO scheduler.DAGScheduler: Job 684 finished: foreachPartition at PredictorEngineApp.java:153, took 9.392374 s 18/04/17 16:58:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4b5dd0de connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4b5dd0de0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58678, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94b8, negotiated timeout = 60000 18/04/17 16:58:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94b8 18/04/17 16:58:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94b8 closed 18/04/17 16:58:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.23 from job set of time 1523973480000 ms 18/04/17 16:58:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 662.0 (TID 662) in 10314 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:58:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 662.0, whose tasks have all completed, from pool 18/04/17 16:58:10 INFO scheduler.DAGScheduler: ResultStage 662 (foreachPartition at PredictorEngineApp.java:153) finished in 10.314 s 18/04/17 16:58:10 INFO scheduler.DAGScheduler: Job 662 finished: foreachPartition at PredictorEngineApp.java:153, took 10.325997 s 18/04/17 16:58:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x397ee9b1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x397ee9b10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36831, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94f1, negotiated timeout = 60000 18/04/17 16:58:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94f1 18/04/17 16:58:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94f1 closed 18/04/17 16:58:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.11 from job set of time 1523973480000 ms 18/04/17 16:58:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 661.0 (TID 661) in 11920 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:58:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 661.0, whose tasks have all completed, from pool 18/04/17 16:58:11 INFO scheduler.DAGScheduler: ResultStage 661 (foreachPartition at PredictorEngineApp.java:153) finished in 11.920 s 18/04/17 16:58:11 INFO scheduler.DAGScheduler: Job 661 finished: foreachPartition at PredictorEngineApp.java:153, took 11.929599 s 18/04/17 16:58:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x394209b7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x394209b70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36835, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94f3, negotiated timeout = 60000 18/04/17 16:58:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94f3 18/04/17 16:58:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94f3 closed 18/04/17 16:58:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.5 from job set of time 1523973480000 ms 18/04/17 16:58:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 670.0 (TID 670) in 12011 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 16:58:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 670.0, whose tasks have all completed, from pool 18/04/17 16:58:12 INFO scheduler.DAGScheduler: ResultStage 670 (foreachPartition at PredictorEngineApp.java:153) finished in 12.012 s 18/04/17 16:58:12 INFO scheduler.DAGScheduler: Job 670 finished: foreachPartition at PredictorEngineApp.java:153, took 12.064670 s 18/04/17 16:58:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x24ae8c52 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x24ae8c520x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36839, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94f4, negotiated timeout = 60000 18/04/17 16:58:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94f4 18/04/17 16:58:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94f4 closed 18/04/17 16:58:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.1 from job set of time 1523973480000 ms 18/04/17 16:58:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 684.0 (TID 684) in 13863 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:58:14 INFO scheduler.DAGScheduler: ResultStage 684 (foreachPartition at PredictorEngineApp.java:153) finished in 13.863 s 18/04/17 16:58:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 684.0, whose tasks have all completed, from pool 18/04/17 16:58:14 INFO scheduler.DAGScheduler: Job 683 finished: foreachPartition at PredictorEngineApp.java:153, took 13.964100 s 18/04/17 16:58:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x52b8b276 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x52b8b2760x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36846, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c94f6, negotiated timeout = 60000 18/04/17 16:58:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c94f6 18/04/17 16:58:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c94f6 closed 18/04/17 16:58:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:14 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.22 from job set of time 1523973480000 ms 18/04/17 16:58:23 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 657.0 (TID 657) in 82928 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:58:23 INFO cluster.YarnClusterScheduler: Removed TaskSet 657.0, whose tasks have all completed, from pool 18/04/17 16:58:23 INFO scheduler.DAGScheduler: ResultStage 657 (foreachPartition at PredictorEngineApp.java:153) finished in 82.929 s 18/04/17 16:58:23 INFO scheduler.DAGScheduler: Job 657 finished: foreachPartition at PredictorEngineApp.java:153, took 83.035363 s 18/04/17 16:58:23 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x79ff2380 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:58:23 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x79ff23800x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:58:23 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:58:23 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41459, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:58:23 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28deb, negotiated timeout = 60000 18/04/17 16:58:23 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28deb 18/04/17 16:58:23 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28deb closed 18/04/17 16:58:23 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:58:23 INFO scheduler.JobScheduler: Finished job streaming job 1523973420000 ms.22 from job set of time 1523973420000 ms 18/04/17 16:58:23 INFO scheduler.JobScheduler: Total delay: 83.157 s for time 1523973420000 ms (execution: 83.096 s) 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 828 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 828 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 828 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 828 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 829 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 829 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 829 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 829 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 830 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 830 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 830 from persistence list 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 670 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 830 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 831 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 831 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 831 from persistence list 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_685_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 831 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 832 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 832 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 832 from persistence list 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_685_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 832 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 686 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 833 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 833 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 833 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 833 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 834 from persistence list 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_684_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 834 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 834 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 834 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 835 from persistence list 
18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 835 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 835 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 835 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 836 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 836 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 836 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 836 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 837 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 837 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 837 from persistence list 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_684_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 837 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 838 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 838 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 838 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 838 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 839 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 839 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 839 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 839 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 840 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 840 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 840 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 840 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 841 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 841 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 841 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 841 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 842 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 842 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 842 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 842 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 843 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 843 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 843 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 843 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 844 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 844 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 844 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 844 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 845 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 845 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 845 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 845 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 846 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 846 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 846 from persistence list 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_657_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO 
storage.BlockManager: Removing RDD 846 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 847 from persistence list 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_657_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 847 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 847 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 847 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 848 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 848 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 848 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 848 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 849 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 849 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 849 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 849 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 850 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 850 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 850 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 850 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 851 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 851 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 851 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 851 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 852 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 852 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 852 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 852 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 853 from persistence list 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 658 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 662 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 853 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 853 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 853 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 854 from persistence list 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_660_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 854 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 854 from persistence list 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_660_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 854 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 855 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 855 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 855 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 855 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 856 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 856 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 856 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 856 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 857 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 
857 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 857 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 857 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 858 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 858 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 858 from persistence list 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 661 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 858 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 859 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 859 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 859 from persistence list 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_662_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 859 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 860 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 860 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 860 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 860 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 861 from persistence list 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_662_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 861 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 861 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 861 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 862 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 862 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 862 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 862 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 863 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 863 18/04/17 16:58:23 INFO kafka.KafkaRDD: Removing RDD 863 from persistence list 18/04/17 16:58:23 INFO storage.BlockManager: Removing RDD 863 18/04/17 16:58:23 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 663 18/04/17 16:58:23 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973300000 ms 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_661_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_661_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 665 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_663_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_663_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 664 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_665_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_665_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 666 18/04/17 16:58:23 INFO 
storage.BlockManagerInfo: Removed broadcast_664_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_664_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 668 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_666_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_666_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 667 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_668_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_668_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 669 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_667_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_667_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 671 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_669_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_669_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_671_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_671_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 672 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_670_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_670_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 674 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_672_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_672_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 673 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_674_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_674_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 675 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_673_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_673_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 677 18/04/17 16:58:23 INFO 
storage.BlockManagerInfo: Removed broadcast_675_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_675_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 676 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_676_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_676_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_678_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_678_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 679 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_680_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_680_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 681 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 683 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_681_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_681_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 682 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_683_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_683_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 684 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_682_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:58:23 INFO storage.BlockManagerInfo: Removed broadcast_682_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:58:23 INFO spark.ContextCleaner: Cleaned accumulator 685 18/04/17 16:59:00 INFO scheduler.JobScheduler: Added jobs for time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.0 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.1 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.2 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.3 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.4 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.0 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.5 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: 
Starting job streaming job 1523973540000 ms.6 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.4 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.3 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.7 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.9 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.8 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.10 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.11 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.12 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.13 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.14 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.13 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.15 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.16 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.18 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.17 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.14 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.19 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.16 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.20 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.17 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.22 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.21 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.23 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.24 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.21 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.26 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.25 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.27 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.28 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.30 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.29 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.31 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.30 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.32 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.34 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.33 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973540000 ms.35 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.35 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 686 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 686 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 686 (KafkaRDD[964] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: 
Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_686 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_686_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_686_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 686 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 686 (KafkaRDD[964] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 686.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 687 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 687 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 687 (KafkaRDD[970] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 686.0 (TID 686, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_687 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_687_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_687_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 687 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 687 (KafkaRDD[970] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO 
cluster.YarnClusterScheduler: Adding task set 687.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 688 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 688 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 688 (KafkaRDD[941] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 687.0 (TID 687, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_688 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_688_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_688_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 688 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 688 (KafkaRDD[941] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 688.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 689 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 689 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 689 (KafkaRDD[948] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 688.0 (TID 688, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_689 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_689_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_689_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 689 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 689 (KafkaRDD[948] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 689.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 690 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 690 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: 
Submitting ResultStage 690 (KafkaRDD[968] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_690 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 689.0 (TID 689, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_690_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_690_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 690 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 690 (KafkaRDD[968] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 690.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 691 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 691 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 691 (KafkaRDD[954] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 690.0 (TID 690, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_691 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_686_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_691_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_691_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 691 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 691 (KafkaRDD[954] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 691.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 692 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 692 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 692 (KafkaRDD[955] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_692 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 691.0 (TID 691, ***hostname masked***, executor 
11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_688_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_692_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_692_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 692 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 692 (KafkaRDD[955] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 692.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 693 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 693 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 693 (KafkaRDD[945] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_693 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 692.0 (TID 692, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_693_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_693_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 693 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 693 (KafkaRDD[945] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 693.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 694 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 694 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 694 (KafkaRDD[961] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_694 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 693.0 (TID 693, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_694_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_694_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 
694 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 694 (KafkaRDD[961] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 694.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 695 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 695 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 695 (KafkaRDD[947] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_695 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 694.0 (TID 694, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_690_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_695_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_695_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 695 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 695 (KafkaRDD[947] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 695.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 696 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 696 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 696 (KafkaRDD[965] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_696 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 695.0 (TID 695, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_691_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_696_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_696_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 696 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 696 (KafkaRDD[965] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO 
cluster.YarnClusterScheduler: Adding task set 696.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 697 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 697 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 697 (KafkaRDD[942] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_697 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 696.0 (TID 696, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_697_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_697_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 697 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 697 (KafkaRDD[942] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 697.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 698 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 698 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 698 (KafkaRDD[944] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_698 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 697.0 (TID 697, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_698_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_698_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 698 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 698 (KafkaRDD[944] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 698.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 699 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 699 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: 
Submitting ResultStage 699 (KafkaRDD[937] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_699 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_693_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 698.0 (TID 698, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_696_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_694_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_699_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_699_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 699 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 699 (KafkaRDD[937] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 699.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 700 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 700 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 700 (KafkaRDD[959] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_700 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 699.0 (TID 699, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_700_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_700_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 700 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 700 (KafkaRDD[959] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 700.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 701 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 701 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 701 (KafkaRDD[967] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_701 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_689_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 700.0 (TID 700, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_701_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_701_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 701 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 701 (KafkaRDD[967] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 701.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 702 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 702 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 702 (KafkaRDD[938] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_702 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 701.0 (TID 701, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_699_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_702_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_702_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 702 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 702 (KafkaRDD[938] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 702.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 704 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 703 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 703 (KafkaRDD[956] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_703 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:59:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 702.0 (TID 702, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_703_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_703_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 703 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 703 (KafkaRDD[956] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 703.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 703 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 704 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 704 (KafkaRDD[962] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_704 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_701_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 703.0 (TID 703, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_687_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_704_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_704_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 704 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 704 (KafkaRDD[962] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 704.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 705 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 705 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 705 (KafkaRDD[960] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_705 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 704.0 (TID 704, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_705_piece0 stored as bytes in 
memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_705_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_698_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 705 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 705 (KafkaRDD[960] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 705.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 706 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 706 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_703_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 706 (KafkaRDD[943] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_706 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 705.0 (TID 705, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_706_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_706_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 706 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 706 (KafkaRDD[943] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 706.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 707 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 707 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 707 (KafkaRDD[946] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_707 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 706.0 (TID 706, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_695_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_707_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:59:00 INFO 
storage.BlockManagerInfo: Added broadcast_702_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_707_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 707 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 707 (KafkaRDD[946] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 707.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 708 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 708 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 708 (KafkaRDD[958] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_708 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 707.0 (TID 707, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_708_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_708_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 708 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 708 (KafkaRDD[958] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 708.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 709 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 709 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_705_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 709 (KafkaRDD[951] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_709 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 708.0 (TID 708, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_709_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_709_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 709 from broadcast at DAGScheduler.scala:1006 18/04/17 
16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 709 (KafkaRDD[951] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_706_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 709.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 710 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 710 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 710 (KafkaRDD[969] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_710 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 709.0 (TID 709, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_704_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_710_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_710_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 710 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 710 (KafkaRDD[969] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 710.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Got job 711 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 711 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting ResultStage 711 (KafkaRDD[963] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_711 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 710.0 (TID 710, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_708_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.MemoryStore: Block broadcast_711_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_711_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:00 INFO spark.SparkContext: Created broadcast 711 from broadcast at DAGScheduler.scala:1006 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 711 
(KafkaRDD[963] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Adding task set 711.0 with 1 tasks 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 711.0 (TID 711, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_710_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_707_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_709_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 702.0 (TID 702) in 82 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 702.0, whose tasks have all completed, from pool 18/04/17 16:59:00 INFO scheduler.DAGScheduler: ResultStage 702 (foreachPartition at PredictorEngineApp.java:153) finished in 0.083 s 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Job 702 finished: foreachPartition at PredictorEngineApp.java:153, took 0.143141 s 18/04/17 16:59:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x41015077 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x410150770x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58863, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94c9, negotiated timeout = 60000 18/04/17 16:59:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94c9 18/04/17 16:59:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94c9 closed 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.2 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_697_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_692_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_711_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO storage.BlockManagerInfo: Added broadcast_700_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 692.0 (TID 692) in 230 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 692.0, whose tasks have all completed, from pool 18/04/17 16:59:00 INFO scheduler.DAGScheduler: ResultStage 692 (foreachPartition at PredictorEngineApp.java:153) finished in 0.230 s 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Job 692 finished: foreachPartition at PredictorEngineApp.java:153, took 0.253194 s 18/04/17 16:59:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x73e6b706 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x73e6b7060x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41610, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dfb, negotiated timeout = 60000 18/04/17 16:59:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dfb 18/04/17 16:59:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 700.0 (TID 700) in 210 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:59:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 700.0, whose tasks have all completed, from pool 18/04/17 16:59:00 INFO scheduler.DAGScheduler: ResultStage 700 (foreachPartition at PredictorEngineApp.java:153) finished in 0.211 s 18/04/17 16:59:00 INFO scheduler.DAGScheduler: Job 700 finished: foreachPartition at PredictorEngineApp.java:153, took 0.265444 s 18/04/17 16:59:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x328a8bdb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x328a8bdb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37018, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dfb closed 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9507, negotiated timeout = 60000 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.19 from job set of time 1523973540000 ms 18/04/17 16:59:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9507 18/04/17 16:59:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9507 closed 18/04/17 16:59:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.23 from job set of time 1523973540000 ms 18/04/17 16:59:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 706.0 (TID 706) in 1048 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:59:01 INFO scheduler.DAGScheduler: ResultStage 706 (foreachPartition at PredictorEngineApp.java:153) finished in 1.049 s 18/04/17 16:59:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 706.0, whose tasks have all completed, from pool 18/04/17 16:59:01 INFO scheduler.DAGScheduler: Job 706 finished: foreachPartition at PredictorEngineApp.java:153, took 1.120261 s 18/04/17 16:59:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x28c49f38 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 16:59:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x28c49f380x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37022, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c950c, negotiated timeout = 60000 18/04/17 16:59:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c950c 18/04/17 16:59:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 698.0 (TID 698) in 1101 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 16:59:01 INFO scheduler.DAGScheduler: ResultStage 698 (foreachPartition at PredictorEngineApp.java:153) finished in 1.101 s 18/04/17 16:59:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 698.0, whose tasks have all completed, from pool 18/04/17 16:59:01 INFO scheduler.DAGScheduler: Job 698 finished: foreachPartition at PredictorEngineApp.java:153, took 1.141178 s 18/04/17 16:59:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4babcc56 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4babcc560x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37025, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:01 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c950c closed 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c950d, negotiated timeout = 60000 18/04/17 16:59:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.7 from job set of time 1523973540000 ms 18/04/17 16:59:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c950d 18/04/17 16:59:01 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c950d closed 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.8 from job set of time 1523973540000 ms 18/04/17 16:59:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 694.0 (TID 694) in 1820 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:59:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 694.0, whose tasks have all completed, from pool 18/04/17 16:59:01 INFO scheduler.DAGScheduler: ResultStage 694 (foreachPartition at PredictorEngineApp.java:153) finished in 1.820 s 18/04/17 16:59:01 INFO scheduler.DAGScheduler: Job 694 finished: foreachPartition at PredictorEngineApp.java:153, took 1.849122 s 18/04/17 16:59:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7e5d87d8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7e5d87d80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37029, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c950f, negotiated timeout = 60000 18/04/17 16:59:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c950f 18/04/17 16:59:01 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c950f closed 18/04/17 16:59:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.25 from job set of time 1523973540000 ms 18/04/17 16:59:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 697.0 (TID 697) in 4180 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 16:59:04 INFO scheduler.DAGScheduler: ResultStage 697 (foreachPartition at PredictorEngineApp.java:153) finished in 4.180 s 18/04/17 16:59:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 697.0, whose tasks have all completed, from pool 18/04/17 16:59:04 INFO scheduler.DAGScheduler: Job 697 finished: foreachPartition at PredictorEngineApp.java:153, took 4.218014 s 18/04/17 16:59:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xc946d3e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xc946d3e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41632, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28dff, negotiated timeout = 60000 18/04/17 16:59:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28dff 18/04/17 16:59:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28dff closed 18/04/17 16:59:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.6 from job set of time 1523973540000 ms 18/04/17 16:59:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 686.0 (TID 686) in 5324 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:59:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 686.0, whose tasks have all completed, from pool 18/04/17 16:59:05 INFO scheduler.DAGScheduler: ResultStage 686 (foreachPartition at PredictorEngineApp.java:153) finished in 5.325 s 18/04/17 16:59:05 INFO scheduler.DAGScheduler: Job 686 finished: foreachPartition at PredictorEngineApp.java:153, took 5.330583 s 18/04/17 16:59:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x19f6d716 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x19f6d7160x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58892, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94d1, negotiated timeout = 60000 18/04/17 16:59:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94d1 18/04/17 16:59:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94d1 closed 18/04/17 16:59:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.28 from job set of time 1523973540000 ms 18/04/17 16:59:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 701.0 (TID 701) in 6116 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:59:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 701.0, whose tasks have all completed, from pool 18/04/17 16:59:06 INFO scheduler.DAGScheduler: ResultStage 701 (foreachPartition at PredictorEngineApp.java:153) finished in 6.116 s 18/04/17 16:59:06 INFO scheduler.DAGScheduler: Job 701 finished: foreachPartition at PredictorEngineApp.java:153, took 6.173787 s 18/04/17 16:59:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x16669f42 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x16669f420x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41640, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e02, negotiated timeout = 60000 18/04/17 16:59:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e02 18/04/17 16:59:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e02 closed 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.31 from job set of time 1523973540000 ms 18/04/17 16:59:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 691.0 (TID 691) in 6324 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:59:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 691.0, whose tasks have all completed, from pool 18/04/17 16:59:06 INFO scheduler.DAGScheduler: ResultStage 691 (foreachPartition at PredictorEngineApp.java:153) finished in 6.325 s 18/04/17 16:59:06 INFO scheduler.DAGScheduler: Job 691 finished: foreachPartition at PredictorEngineApp.java:153, took 6.344877 s 18/04/17 16:59:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x68dff15d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x68dff15d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37048, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9511, negotiated timeout = 60000 18/04/17 16:59:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9511 18/04/17 16:59:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9511 closed 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 709.0 (TID 709) in 6294 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:59:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 709.0, whose tasks have all completed, from pool 18/04/17 16:59:06 INFO scheduler.DAGScheduler: ResultStage 709 (foreachPartition at PredictorEngineApp.java:153) finished in 6.295 s 18/04/17 16:59:06 INFO scheduler.DAGScheduler: Job 709 finished: foreachPartition at PredictorEngineApp.java:153, took 6.372284 s 18/04/17 16:59:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7576c27d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7576c27d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37051, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.18 from job set of time 1523973540000 ms 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9512, negotiated timeout = 60000 18/04/17 16:59:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9512 18/04/17 16:59:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9512 closed 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.15 from job set of time 1523973540000 ms 18/04/17 16:59:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 693.0 (TID 693) in 6386 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:59:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 693.0, whose tasks have all completed, from pool 18/04/17 16:59:06 INFO scheduler.DAGScheduler: ResultStage 693 (foreachPartition at PredictorEngineApp.java:153) finished in 6.386 s 18/04/17 16:59:06 INFO scheduler.DAGScheduler: Job 693 finished: foreachPartition at PredictorEngineApp.java:153, took 6.412145 s 18/04/17 16:59:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xbc3b487 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xbc3b4870x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37054, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9513, negotiated timeout = 60000 18/04/17 16:59:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9513 18/04/17 16:59:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9513 closed 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.9 from job set of time 1523973540000 ms 18/04/17 16:59:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 690.0 (TID 690) in 6505 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:59:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 690.0, whose tasks have all completed, from pool 18/04/17 16:59:06 INFO scheduler.DAGScheduler: ResultStage 690 (foreachPartition at PredictorEngineApp.java:153) finished in 6.505 s 18/04/17 16:59:06 INFO scheduler.DAGScheduler: Job 690 finished: foreachPartition at PredictorEngineApp.java:153, took 6.522273 s 18/04/17 16:59:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x141fcd69 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x141fcd690x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37057, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9514, negotiated timeout = 60000 18/04/17 16:59:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9514 18/04/17 16:59:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9514 closed 18/04/17 16:59:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.32 from job set of time 1523973540000 ms 18/04/17 16:59:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 710.0 (TID 710) in 7420 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 16:59:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 710.0, whose tasks have all completed, from pool 18/04/17 16:59:07 INFO scheduler.DAGScheduler: ResultStage 710 (foreachPartition at PredictorEngineApp.java:153) finished in 7.421 s 18/04/17 16:59:07 INFO scheduler.DAGScheduler: Job 710 finished: foreachPartition at PredictorEngineApp.java:153, took 7.501070 s 18/04/17 16:59:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4022bb8c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4022bb8c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41656, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e05, negotiated timeout = 60000 18/04/17 16:59:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e05 18/04/17 16:59:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e05 closed 18/04/17 16:59:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.33 from job set of time 1523973540000 ms 18/04/17 16:59:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 703.0 (TID 703) in 7534 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 16:59:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 703.0, whose tasks have all completed, from pool 18/04/17 16:59:07 INFO scheduler.DAGScheduler: ResultStage 703 (foreachPartition at PredictorEngineApp.java:153) finished in 7.534 s 18/04/17 16:59:07 INFO scheduler.DAGScheduler: Job 704 finished: foreachPartition at PredictorEngineApp.java:153, took 7.596797 s 18/04/17 16:59:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x37a0b086 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x37a0b0860x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37064, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9515, negotiated timeout = 60000 18/04/17 16:59:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9515 18/04/17 16:59:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9515 closed 18/04/17 16:59:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.20 from job set of time 1523973540000 ms 18/04/17 16:59:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 711.0 (TID 711) in 7909 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 16:59:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 711.0, whose tasks have all completed, from pool 18/04/17 16:59:08 INFO scheduler.DAGScheduler: ResultStage 711 (foreachPartition at PredictorEngineApp.java:153) finished in 7.911 s 18/04/17 16:59:08 INFO scheduler.DAGScheduler: Job 711 finished: foreachPartition at PredictorEngineApp.java:153, took 7.992178 s 18/04/17 16:59:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5efe3d46 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5efe3d460x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37069, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9516, negotiated timeout = 60000 18/04/17 16:59:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9516 18/04/17 16:59:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9516 closed 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.27 from job set of time 1523973540000 ms 18/04/17 16:59:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 708.0 (TID 708) in 8346 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 16:59:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 708.0, whose tasks have all completed, from pool 18/04/17 16:59:08 INFO scheduler.DAGScheduler: ResultStage 708 (foreachPartition at PredictorEngineApp.java:153) finished in 8.346 s 18/04/17 16:59:08 INFO scheduler.DAGScheduler: Job 708 finished: foreachPartition at PredictorEngineApp.java:153, took 8.421451 s 18/04/17 16:59:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x439c77c2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x439c77c20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58923, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94d7, negotiated timeout = 60000 18/04/17 16:59:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94d7 18/04/17 16:59:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94d7 closed 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.22 from job set of time 1523973540000 ms 18/04/17 16:59:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 707.0 (TID 707) in 8726 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 16:59:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 707.0, whose tasks have all completed, from pool 18/04/17 16:59:08 INFO scheduler.DAGScheduler: ResultStage 707 (foreachPartition at PredictorEngineApp.java:153) finished in 8.727 s 18/04/17 16:59:08 INFO scheduler.DAGScheduler: Job 707 finished: foreachPartition at PredictorEngineApp.java:153, took 8.801225 s 18/04/17 16:59:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1e74f9bc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1e74f9bc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41673, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e06, negotiated timeout = 60000 18/04/17 16:59:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e06 18/04/17 16:59:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 696.0 (TID 696) in 8791 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:59:08 INFO scheduler.DAGScheduler: ResultStage 696 (foreachPartition at PredictorEngineApp.java:153) finished in 8.791 s 18/04/17 16:59:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 696.0, whose tasks have all completed, from pool 18/04/17 16:59:08 INFO scheduler.DAGScheduler: Job 696 finished: foreachPartition at PredictorEngineApp.java:153, took 8.826189 s 18/04/17 16:59:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e06 closed 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f121cc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f121cc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58932, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94d9, negotiated timeout = 60000 18/04/17 16:59:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94d9 18/04/17 16:59:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.10 from job set of time 1523973540000 ms 18/04/17 16:59:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94d9 closed 18/04/17 16:59:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.29 from job set of time 1523973540000 ms 18/04/17 16:59:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 689.0 (TID 689) in 9182 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:59:09 INFO scheduler.DAGScheduler: ResultStage 689 (foreachPartition at PredictorEngineApp.java:153) finished in 9.183 s 18/04/17 16:59:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 689.0, whose tasks have all completed, from pool 18/04/17 16:59:09 INFO scheduler.DAGScheduler: Job 689 finished: foreachPartition at PredictorEngineApp.java:153, took 9.197422 s 18/04/17 16:59:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2a4dfba9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2a4dfba90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58936, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94db, negotiated timeout = 60000 18/04/17 16:59:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94db 18/04/17 16:59:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94db closed 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.12 from job set of time 1523973540000 ms 18/04/17 16:59:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 705.0 (TID 705) in 9387 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:59:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 705.0, whose tasks have all completed, from pool 18/04/17 16:59:09 INFO scheduler.DAGScheduler: ResultStage 705 (foreachPartition at PredictorEngineApp.java:153) finished in 9.388 s 18/04/17 16:59:09 INFO scheduler.DAGScheduler: Job 705 finished: foreachPartition at PredictorEngineApp.java:153, took 9.457162 s 18/04/17 16:59:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f3d983d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f3d983d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41683, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e08, negotiated timeout = 60000 18/04/17 16:59:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e08 18/04/17 16:59:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e08 closed 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.24 from job set of time 1523973540000 ms 18/04/17 16:59:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 699.0 (TID 699) in 9616 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:59:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 699.0, whose tasks have all completed, from pool 18/04/17 16:59:09 INFO scheduler.DAGScheduler: ResultStage 699 (foreachPartition at PredictorEngineApp.java:153) finished in 9.624 s 18/04/17 16:59:09 INFO scheduler.DAGScheduler: Job 699 finished: foreachPartition at PredictorEngineApp.java:153, took 9.667629 s 18/04/17 16:59:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x51484e9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x51484e90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41686, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e09, negotiated timeout = 60000 18/04/17 16:59:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e09 18/04/17 16:59:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e09 closed 18/04/17 16:59:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.1 from job set of time 1523973540000 ms 18/04/17 16:59:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 704.0 (TID 704) in 10869 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 16:59:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 704.0, whose tasks have all completed, from pool 18/04/17 16:59:10 INFO scheduler.DAGScheduler: ResultStage 704 (foreachPartition at PredictorEngineApp.java:153) finished in 10.869 s 18/04/17 16:59:10 INFO scheduler.DAGScheduler: Job 703 finished: foreachPartition at PredictorEngineApp.java:153, took 10.935997 s 18/04/17 16:59:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7e4e696f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7e4e696f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41691, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e0a, negotiated timeout = 60000 18/04/17 16:59:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e0a 18/04/17 16:59:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e0a closed 18/04/17 16:59:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.26 from job set of time 1523973540000 ms 18/04/17 16:59:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 687.0 (TID 687) in 14973 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:59:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 687.0, whose tasks have all completed, from pool 18/04/17 16:59:15 INFO scheduler.DAGScheduler: ResultStage 687 (foreachPartition at PredictorEngineApp.java:153) finished in 14.975 s 18/04/17 16:59:15 INFO scheduler.DAGScheduler: Job 687 finished: foreachPartition at PredictorEngineApp.java:153, took 14.983637 s 18/04/17 16:59:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x58ebec05 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x58ebec050x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58955, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94de, negotiated timeout = 60000 18/04/17 16:59:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94de 18/04/17 16:59:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94de closed 18/04/17 16:59:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.34 from job set of time 1523973540000 ms 18/04/17 16:59:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 679.0 (TID 679) in 78965 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 16:59:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 679.0, whose tasks have all completed, from pool 18/04/17 16:59:19 INFO scheduler.DAGScheduler: ResultStage 679 (foreachPartition at PredictorEngineApp.java:153) finished in 78.967 s 18/04/17 16:59:19 INFO scheduler.DAGScheduler: Job 679 finished: foreachPartition at PredictorEngineApp.java:153, took 79.054953 s 18/04/17 16:59:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2c7b4ecc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2c7b4ecc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41708, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e0d, negotiated timeout = 60000 18/04/17 16:59:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e0d 18/04/17 16:59:19 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e0d closed 18/04/17 16:59:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:19 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.33 from job set of time 1523973480000 ms 18/04/17 16:59:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 688.0 (TID 688) in 20454 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:59:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 688.0, whose tasks have all completed, from pool 18/04/17 16:59:20 INFO scheduler.DAGScheduler: ResultStage 688 (foreachPartition at PredictorEngineApp.java:153) finished in 20.455 s 18/04/17 16:59:20 INFO scheduler.DAGScheduler: Job 688 finished: foreachPartition at PredictorEngineApp.java:153, took 20.466903 s 18/04/17 16:59:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x53e2f404 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x53e2f4040x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37117, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c951e, negotiated timeout = 60000 18/04/17 16:59:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c951e 18/04/17 16:59:20 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c951e closed 18/04/17 16:59:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:20 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.5 from job set of time 1523973540000 ms 18/04/17 16:59:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 695.0 (TID 695) in 20912 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 16:59:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 695.0, whose tasks have all completed, from pool 18/04/17 16:59:20 INFO scheduler.DAGScheduler: ResultStage 695 (foreachPartition at PredictorEngineApp.java:153) finished in 20.912 s 18/04/17 16:59:20 INFO scheduler.DAGScheduler: Job 695 finished: foreachPartition at PredictorEngineApp.java:153, took 20.943661 s 18/04/17 16:59:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x779333b0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x779333b00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:58973, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94df, negotiated timeout = 60000 18/04/17 16:59:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94df 18/04/17 16:59:21 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94df closed 18/04/17 16:59:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:21 INFO scheduler.JobScheduler: Finished job streaming job 1523973540000 ms.11 from job set of time 1523973540000 ms 18/04/17 16:59:21 INFO scheduler.JobScheduler: Total delay: 21.023 s for time 1523973540000 ms (execution: 20.978 s) 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 900 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 900 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 864 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 864 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 900 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 900 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 864 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 864 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 901 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 901 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 865 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 865 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 901 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 901 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 865 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 865 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 902 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 902 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 866 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 866 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 902 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 902 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 866 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 866 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 903 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 903 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 867 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 867 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 903 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 903 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 867 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 867 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 904 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 904 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 868 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 868 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 904 from persistence list 18/04/17 
16:59:21 INFO storage.BlockManager: Removing RDD 904 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 868 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 868 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 905 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 905 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 869 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 869 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 905 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 905 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 869 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 869 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 906 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 906 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 870 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 870 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 906 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 906 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 870 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 870 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 907 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 907 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 871 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 871 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 907 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 907 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 871 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 871 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 908 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 908 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 872 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 872 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 908 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 908 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 872 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 872 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 909 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 909 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 873 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 873 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 909 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 909 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 873 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 873 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 910 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 910 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 874 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 874 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 910 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 910 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 874 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 874 
18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 911 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 911 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 875 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 875 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 911 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 911 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 875 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 875 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 912 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 912 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 876 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 876 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 912 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 912 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 876 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 876 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 913 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 913 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 877 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 877 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 913 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 913 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 877 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 877 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 914 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 914 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 878 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 878 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 914 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 914 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 878 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 878 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 915 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 915 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 879 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 879 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 915 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 915 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 879 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 879 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 916 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 916 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 880 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 880 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 916 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 916 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 880 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 880 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 917 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 917 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 
881 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 881 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 917 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 917 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 881 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 881 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 918 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 918 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 882 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 882 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 918 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 918 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 882 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 882 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 919 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 919 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 883 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 883 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 919 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 919 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 883 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 883 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 920 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 920 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 884 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 884 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 920 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 920 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 884 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 884 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 921 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 921 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 885 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 885 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 921 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 921 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 885 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 885 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 922 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 922 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 886 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 886 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 922 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 922 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 886 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 886 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 923 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 923 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 887 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_709_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO 
storage.BlockManager: Removing RDD 887 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 923 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 923 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 887 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 887 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 924 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_709_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 924 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 888 from persistence list 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 687 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 888 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 924 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 924 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 888 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_679_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 888 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 925 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 925 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 889 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_679_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 889 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 925 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 925 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 889 from persistence list 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 680 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 889 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 926 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 926 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 890 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_687_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 890 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 926 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 926 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 890 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_687_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 890 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 927 from persistence list 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 688 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 927 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 891 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 891 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 927 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_686_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 927 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 
891 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_686_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 891 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 928 from persistence list 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 690 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 928 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 892 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 892 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 928 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_688_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 928 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 892 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 892 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 929 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_688_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 929 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 689 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 893 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 893 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 929 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 929 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 893 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_690_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 893 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 930 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_690_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 930 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 691 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 894 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 894 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 930 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 930 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 894 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_689_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 894 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 931 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 931 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 895 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_689_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 895 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 931 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 931 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 895 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 
895 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 932 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 932 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 896 from persistence list 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 693 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 896 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 932 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 932 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 896 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_691_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 896 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 933 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 933 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 897 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_691_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 897 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 933 from persistence list 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 692 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 933 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 897 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 897 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 934 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_711_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 934 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 898 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_711_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 898 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 712 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 934 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 934 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 898 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_710_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 898 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_710_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 935 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 935 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 899 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 899 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 935 from persistence list 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_693_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 935 18/04/17 16:59:21 INFO kafka.KafkaRDD: Removing RDD 899 from persistence list 18/04/17 16:59:21 INFO storage.BlockManager: Removing RDD 899 18/04/17 16:59:21 INFO scheduler.ReceivedBlockTracker: Deleting batches 
ArrayBuffer() 18/04/17 16:59:21 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973360000 ms 1523973420000 ms 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_693_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 694 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_692_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_692_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 696 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_694_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_694_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 695 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_696_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_696_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 697 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_695_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_695_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 699 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_697_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_697_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 698 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_699_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_699_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 700 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_698_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_698_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 702 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_700_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_700_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 701 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 703 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_701_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_701_piece0 on ***hostname masked***:35790 
in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_703_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_703_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 704 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_702_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_702_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 706 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_704_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_704_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 705 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_706_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_706_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 707 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_705_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_705_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 709 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_707_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_707_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 708 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 710 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_708_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 16:59:21 INFO storage.BlockManagerInfo: Removed broadcast_708_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 16:59:21 INFO spark.ContextCleaner: Cleaned accumulator 711 18/04/17 16:59:39 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 677.0 (TID 677) in 99038 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 16:59:39 INFO scheduler.DAGScheduler: ResultStage 677 (foreachPartition at PredictorEngineApp.java:153) finished in 99.038 s 18/04/17 16:59:39 INFO cluster.YarnClusterScheduler: Removed TaskSet 677.0, whose tasks have all completed, from pool 18/04/17 16:59:39 INFO scheduler.DAGScheduler: Job 677 finished: foreachPartition at PredictorEngineApp.java:153, took 99.119886 s 18/04/17 16:59:39 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1cb2f02 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 16:59:39 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 
sessionTimeout=60000 watcher=hconnection-0x1cb2f020x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 16:59:39 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 16:59:39 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59023, server: ***hostname masked***/***IP masked***:2181 18/04/17 16:59:39 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94e0, negotiated timeout = 60000 18/04/17 16:59:39 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94e0 18/04/17 16:59:39 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94e0 closed 18/04/17 16:59:39 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 16:59:39 INFO scheduler.JobScheduler: Finished job streaming job 1523973480000 ms.26 from job set of time 1523973480000 ms 18/04/17 16:59:39 INFO scheduler.JobScheduler: Total delay: 99.223 s for time 1523973480000 ms (execution: 99.177 s) 18/04/17 16:59:39 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 16:59:39 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 17:00:00 INFO scheduler.JobScheduler: Added jobs for time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.0 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.1 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.2 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.0 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.3 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.4 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.5 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.6 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.4 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.7 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.3 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.9 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.8 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.11 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.10 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.12 from job set of 
time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.13 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.14 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.13 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.16 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.15 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.14 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.16 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.17 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.18 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.19 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.17 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.20 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.21 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.21 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.22 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.23 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.24 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.25 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.26 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.27 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.28 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.29 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.30 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.31 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.32 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.30 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.33 from 
job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.34 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973600000 ms.35 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 713 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 712 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 712 (KafkaRDD[987] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_712 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO 
spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_712_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_712_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 712 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 712 (KafkaRDD[987] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 712.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 712 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 713 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 713 (KafkaRDD[980] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_713 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 712.0 (TID 712, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_713_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_713_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 713 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 713 (KafkaRDD[980] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 713.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 714 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 714 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 714 (KafkaRDD[995] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 713.0 (TID 713, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_714 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_714_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_714_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 714 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 714 (KafkaRDD[995] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 714.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 715 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 715 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 715 (KafkaRDD[990] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_715 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 714.0 (TID 714, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_715_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_715_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 715 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 715 (KafkaRDD[990] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 715.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 716 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 716 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 716 (KafkaRDD[1001] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_716 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 715.0 (TID 715, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_716_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_716_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 716 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 716 (KafkaRDD[1001] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 716.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 717 (foreachPartition at PredictorEngineApp.java:153) with 1 
output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 717 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 717 (KafkaRDD[1000] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 716.0 (TID 716, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_717 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_717_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_717_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 717 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 717 (KafkaRDD[1000] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 717.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 718 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 718 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 718 (KafkaRDD[994] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_713_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_718 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 717.0 (TID 717, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_712_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_718_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_718_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 718 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 718 (KafkaRDD[994] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 718.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 720 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 719 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 
17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 719 (KafkaRDD[996] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_719 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 718.0 (TID 718, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_714_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_719_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.5 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_719_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 719 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 719 (KafkaRDD[996] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 719.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 719 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 720 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 720 (KafkaRDD[1007] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_720 stored as values in memory (estimated size 5.7 KB, free 490.5 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 719.0 (TID 719, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_715_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_720_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_720_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 720 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 720 (KafkaRDD[1007] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 720.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 721 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 721 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 721 (KafkaRDD[991] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 
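The scheduler entries around here follow a fixed pattern per output operation: "Got job N (foreachPartition at PredictorEngineApp.java:153)", a single ResultStage built on a "KafkaRDD[...] at createDirectStream at PredictorEngineApp.java:125", one small task broadcast, and a one-task TaskSet. Together with the job set running up to ms.35 for each batch time, this suggests the application defines several dozen Kafka direct streams, or at least that many foreachPartition output operations, each becoming an independent single-stage job per batch. A minimal sketch of what the code around those two line numbers presumably looks like follows; the broker list, topic name and per-record processing are assumptions, and only one stream is shown.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class PredictorEngineSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch"); // hypothetical name
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical brokers
        Set<String> topics = new HashSet<>(Arrays.asList("predictor-input")); // hypothetical topic

        // Roughly the role of PredictorEngineApp.java:125 - each batch of this stream becomes
        // one of the KafkaRDD[n] instances named in the ResultStage descriptions above.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        // Roughly the role of PredictorEngineApp.java:153 - every such output operation shows up
        // as its own "foreachPartition" job in the job set of each batch.
        stream.foreachRDD(rdd -> rdd.foreachPartition(records -> {
            while (records.hasNext()) {
                // Score the record and write the result out (the ZooKeeper/HBase connections seen
                // earlier in this log suggest HBase as the sink); the real logic is not visible here.
                records.next();
            }
        }));

        jssc.start();
        jssc.awaitTermination();
    }
}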
18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 720.0 (TID 720, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_721 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_721_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_721_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 721 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 721 (KafkaRDD[991] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 721.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 722 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 722 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 722 (KafkaRDD[1006] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_722 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 721.0 (TID 721, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_722_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_722_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 722 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 722 (KafkaRDD[1006] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 722.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 723 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 723 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 723 (KafkaRDD[981] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_723 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 722.0 (TID 722, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_720_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_717_piece0 in memory 
on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_723_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_723_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 723 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 723 (KafkaRDD[981] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 723.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 724 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 724 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 724 (KafkaRDD[1005] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_724 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 723.0 (TID 723, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_724_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_724_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 724 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 724 (KafkaRDD[1005] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 724.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 725 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 725 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 725 (KafkaRDD[977] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_725 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_719_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 724.0 (TID 724, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_722_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_716_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO 
storage.MemoryStore: Block broadcast_725_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_725_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 725 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 725 (KafkaRDD[977] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 725.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 726 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 726 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 726 (KafkaRDD[973] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_726 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 725.0 (TID 725, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_718_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_726_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_726_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 726 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 726 (KafkaRDD[973] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 726.0 with 1 tasks 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_723_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 727 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 727 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 727 (KafkaRDD[978] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_727 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 726.0 (TID 726, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Removed broadcast_677_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_727_piece0 stored as bytes in memory (estimated size 
3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_727_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 727 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 727 (KafkaRDD[978] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 727.0 with 1 tasks 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_721_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 728 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 728 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 728 (KafkaRDD[982] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Removed broadcast_677_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_725_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_728 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 727.0 (TID 727, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:00:00 INFO spark.ContextCleaner: Cleaned accumulator 678 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_728_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_728_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 728 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_724_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 728 (KafkaRDD[982] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 728.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 730 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 729 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 729 (KafkaRDD[983] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_729 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_726_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 
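Each of these ResultStages runs as a single task ("Adding task set N.0 with 1 tasks") because the Kafka direct stream creates exactly one RDD partition per Kafka topic-partition, and a task's preferred location is the host of that partition's leader broker, which would explain the mix of NODE_LOCAL and RACK_LOCAL placements in the TaskSetManager entries. The helper below is a hypothetical addition (not part of PredictorEngineApp) showing the standard way to inspect the offset range behind each such partition from inside foreachRDD:

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.streaming.kafka.HasOffsetRanges;
import org.apache.spark.streaming.kafka.OffsetRange;

public final class OffsetRangeLogger {
    private OffsetRangeLogger() {}

    // Hypothetical helper: call from inside foreachRDD, before foreachPartition, in the sketch above.
    public static void logOffsets(JavaPairRDD<String, String> rdd) {
        // The underlying RDD of a direct-stream batch is a KafkaRDD and implements HasOffsetRanges;
        // each OffsetRange corresponds to one Kafka topic-partition and therefore to one task above.
        OffsetRange[] ranges = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
        for (OffsetRange range : ranges) {
            System.out.println(range.topic() + "-" + range.partition()
                    + ": offsets " + range.fromOffset() + " to " + range.untilOffset());
        }
    }
}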
18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 728.0 (TID 728, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_729_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_729_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 729 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 729 (KafkaRDD[983] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 729.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 729 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 730 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 730 (KafkaRDD[992] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_730 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 729.0 (TID 729, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_730_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_730_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 730 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 730 (KafkaRDD[992] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 730.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 731 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 731 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 731 (KafkaRDD[974] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_731 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 730.0 (TID 730, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_727_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_728_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_731_piece0 
stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_731_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 731 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 731 (KafkaRDD[974] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 731.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 733 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 732 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_729_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 732 (KafkaRDD[984] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_732 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 731.0 (TID 731, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_732_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_732_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 732 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 732 (KafkaRDD[984] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 732.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 732 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 733 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 733 (KafkaRDD[998] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_733 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 732.0 (TID 732, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_733_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_733_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 733 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing 
tasks from ResultStage 733 (KafkaRDD[998] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 733.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 734 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 734 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 734 (KafkaRDD[999] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_734 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 733.0 (TID 733, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_734_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_734_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 734 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 734 (KafkaRDD[999] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 734.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 735 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 735 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 735 (KafkaRDD[979] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_735 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 734.0 (TID 734, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_731_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_735_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_735_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 735 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 735 (KafkaRDD[979] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 735.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 736 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: 
ResultStage 736 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 736 (KafkaRDD[1003] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_732_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_736 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 735.0 (TID 735, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_736_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_736_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 736 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 736 (KafkaRDD[1003] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 736.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 737 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 737 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 737 (KafkaRDD[1004] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_737 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 736.0 (TID 736, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_737_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_737_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 737 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 737 (KafkaRDD[1004] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 737.0 with 1 tasks 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Got job 738 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 738 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting ResultStage 738 (KafkaRDD[997] at createDirectStream at PredictorEngineApp.java:125), which has no 
missing parents 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_738 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_734_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_735_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 737.0 (TID 737, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_730_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO storage.MemoryStore: Block broadcast_738_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_738_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:00:00 INFO spark.SparkContext: Created broadcast 738 from broadcast at DAGScheduler.scala:1006 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 738 (KafkaRDD[997] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Adding task set 738.0 with 1 tasks 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_733_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 738.0 (TID 738, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_737_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 715.0 (TID 715) in 85 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: ResultStage 715 (foreachPartition at PredictorEngineApp.java:153) finished in 0.086 s 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 715.0, whose tasks have all completed, from pool 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Job 715 finished: foreachPartition at PredictorEngineApp.java:153, took 0.100845 s 18/04/17 17:00:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x21765518 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x217655180x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_736_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59142, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94e8, negotiated timeout = 60000 18/04/17 17:00:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94e8 18/04/17 17:00:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94e8 closed 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.18 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 730.0 (TID 730) in 71 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 730.0, whose tasks have all completed, from pool 18/04/17 17:00:00 INFO scheduler.DAGScheduler: ResultStage 730 (foreachPartition at PredictorEngineApp.java:153) finished in 0.072 s 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Job 729 finished: foreachPartition at PredictorEngineApp.java:153, took 0.145111 s 18/04/17 17:00:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x389790b9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x389790b90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37294, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c952d, negotiated timeout = 60000 18/04/17 17:00:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c952d 18/04/17 17:00:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c952d closed 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.20 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO storage.BlockManagerInfo: Added broadcast_738_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 720.0 (TID 720) in 280 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: ResultStage 720 (foreachPartition at PredictorEngineApp.java:153) finished in 0.281 s 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 720.0, whose tasks have all completed, from pool 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Job 719 finished: foreachPartition at PredictorEngineApp.java:153, took 0.310853 s 18/04/17 17:00:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x488585e5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x488585e50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59148, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94ec, negotiated timeout = 60000 18/04/17 17:00:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94ec 18/04/17 17:00:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94ec closed 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.35 from job set of time 1523973600000 ms 18/04/17 17:00:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 735.0 (TID 735) in 847 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:00:00 INFO scheduler.DAGScheduler: ResultStage 735 (foreachPartition at PredictorEngineApp.java:153) finished in 0.848 s 18/04/17 17:00:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 735.0, whose tasks have all completed, from pool 18/04/17 17:00:00 INFO scheduler.DAGScheduler: Job 735 finished: foreachPartition at PredictorEngineApp.java:153, took 0.934173 s 18/04/17 17:00:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2eb1eb5b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2eb1eb5b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41895, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e21, negotiated timeout = 60000 18/04/17 17:00:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e21 18/04/17 17:00:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e21 closed 18/04/17 17:00:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.7 from job set of time 1523973600000 ms 18/04/17 17:00:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 738.0 (TID 738) in 1023 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:00:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 738.0, whose tasks have all completed, from pool 18/04/17 17:00:01 INFO scheduler.DAGScheduler: ResultStage 738 (foreachPartition at PredictorEngineApp.java:153) finished in 1.023 s 18/04/17 17:00:01 INFO scheduler.DAGScheduler: Job 738 finished: foreachPartition at PredictorEngineApp.java:153, took 1.115686 s 18/04/17 17:00:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x77c2afad connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x77c2afad0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59155, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94f0, negotiated timeout = 60000 18/04/17 17:00:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94f0 18/04/17 17:00:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94f0 closed 18/04/17 17:00:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.25 from job set of time 1523973600000 ms 18/04/17 17:00:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 713.0 (TID 713) in 2047 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:00:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 713.0, whose tasks have all completed, from pool 18/04/17 17:00:02 INFO scheduler.DAGScheduler: ResultStage 713 (foreachPartition at PredictorEngineApp.java:153) finished in 2.047 s 18/04/17 17:00:02 INFO scheduler.DAGScheduler: Job 712 finished: foreachPartition at PredictorEngineApp.java:153, took 2.056337 s 18/04/17 17:00:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x26903012 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x269030120x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41904, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e24, negotiated timeout = 60000 18/04/17 17:00:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e24 18/04/17 17:00:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e24 closed 18/04/17 17:00:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.8 from job set of time 1523973600000 ms 18/04/17 17:00:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 736.0 (TID 736) in 2227 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:00:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 736.0, whose tasks have all completed, from pool 18/04/17 17:00:02 INFO scheduler.DAGScheduler: ResultStage 736 (foreachPartition at PredictorEngineApp.java:153) finished in 2.228 s 18/04/17 17:00:02 INFO scheduler.DAGScheduler: Job 736 finished: foreachPartition at PredictorEngineApp.java:153, took 2.316214 s 18/04/17 17:00:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2272f3ec connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2272f3ec0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41907, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e25, negotiated timeout = 60000 18/04/17 17:00:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e25 18/04/17 17:00:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e25 closed 18/04/17 17:00:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.31 from job set of time 1523973600000 ms 18/04/17 17:00:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 731.0 (TID 731) in 3792 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:00:03 INFO scheduler.DAGScheduler: ResultStage 731 (foreachPartition at PredictorEngineApp.java:153) finished in 3.793 s 18/04/17 17:00:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 731.0, whose tasks have all completed, from pool 18/04/17 17:00:03 INFO scheduler.DAGScheduler: Job 731 finished: foreachPartition at PredictorEngineApp.java:153, took 3.870318 s 18/04/17 17:00:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x9744fce connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x9744fce0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41913, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e26, negotiated timeout = 60000 18/04/17 17:00:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e26 18/04/17 17:00:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e26 closed 18/04/17 17:00:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.2 from job set of time 1523973600000 ms 18/04/17 17:00:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 712.0 (TID 712) in 4169 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:00:04 INFO scheduler.DAGScheduler: ResultStage 712 (foreachPartition at PredictorEngineApp.java:153) finished in 4.169 s 18/04/17 17:00:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 712.0, whose tasks have all completed, from pool 18/04/17 17:00:04 INFO scheduler.DAGScheduler: Job 713 finished: foreachPartition at PredictorEngineApp.java:153, took 4.174620 s 18/04/17 17:00:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x280eaae5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x280eaae50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41917, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e27, negotiated timeout = 60000 18/04/17 17:00:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e27 18/04/17 17:00:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e27 closed 18/04/17 17:00:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.15 from job set of time 1523973600000 ms 18/04/17 17:00:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 727.0 (TID 727) in 7295 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:00:07 INFO scheduler.DAGScheduler: ResultStage 727 (foreachPartition at PredictorEngineApp.java:153) finished in 7.296 s 18/04/17 17:00:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 727.0, whose tasks have all completed, from pool 18/04/17 17:00:07 INFO scheduler.DAGScheduler: Job 727 finished: foreachPartition at PredictorEngineApp.java:153, took 7.391082 s 18/04/17 17:00:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b98b9b0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1b98b9b00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37354, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9536, negotiated timeout = 60000 18/04/17 17:00:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9536 18/04/17 17:00:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9536 closed 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.6 from job set of time 1523973600000 ms 18/04/17 17:00:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 723.0 (TID 723) in 7485 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:00:07 INFO scheduler.DAGScheduler: ResultStage 723 (foreachPartition at PredictorEngineApp.java:153) finished in 7.486 s 18/04/17 17:00:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 723.0, whose tasks have all completed, from pool 18/04/17 17:00:07 INFO scheduler.DAGScheduler: Job 723 finished: foreachPartition at PredictorEngineApp.java:153, took 7.523971 s 18/04/17 17:00:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1485602c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1485602c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37357, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9537, negotiated timeout = 60000 18/04/17 17:00:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9537 18/04/17 17:00:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9537 closed 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.9 from job set of time 1523973600000 ms 18/04/17 17:00:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 724.0 (TID 724) in 7685 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:00:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 724.0, whose tasks have all completed, from pool 18/04/17 17:00:07 INFO scheduler.DAGScheduler: ResultStage 724 (foreachPartition at PredictorEngineApp.java:153) finished in 7.686 s 18/04/17 17:00:07 INFO scheduler.DAGScheduler: Job 724 finished: foreachPartition at PredictorEngineApp.java:153, took 7.726806 s 18/04/17 17:00:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x763e441e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x763e441e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41955, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e2d, negotiated timeout = 60000 18/04/17 17:00:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e2d 18/04/17 17:00:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e2d closed 18/04/17 17:00:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.33 from job set of time 1523973600000 ms 18/04/17 17:00:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 734.0 (TID 734) in 8083 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:00:08 INFO scheduler.DAGScheduler: ResultStage 734 (foreachPartition at PredictorEngineApp.java:153) finished in 8.083 s 18/04/17 17:00:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 734.0, whose tasks have all completed, from pool 18/04/17 17:00:08 INFO scheduler.DAGScheduler: Job 734 finished: foreachPartition at PredictorEngineApp.java:153, took 8.167209 s 18/04/17 17:00:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1821ab9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1821ab90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37364, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9538, negotiated timeout = 60000 18/04/17 17:00:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9538 18/04/17 17:00:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9538 closed 18/04/17 17:00:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.27 from job set of time 1523973600000 ms 18/04/17 17:00:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 733.0 (TID 733) in 9156 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:00:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 733.0, whose tasks have all completed, from pool 18/04/17 17:00:09 INFO scheduler.DAGScheduler: ResultStage 733 (foreachPartition at PredictorEngineApp.java:153) finished in 9.156 s 18/04/17 17:00:09 INFO scheduler.DAGScheduler: Job 732 finished: foreachPartition at PredictorEngineApp.java:153, took 9.238853 s 18/04/17 17:00:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5daf6984 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5daf69840x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41964, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e2e, negotiated timeout = 60000 18/04/17 17:00:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e2e 18/04/17 17:00:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e2e closed 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.26 from job set of time 1523973600000 ms 18/04/17 17:00:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 732.0 (TID 732) in 9528 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:00:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 732.0, whose tasks have all completed, from pool 18/04/17 17:00:09 INFO scheduler.DAGScheduler: ResultStage 732 (foreachPartition at PredictorEngineApp.java:153) finished in 9.529 s 18/04/17 17:00:09 INFO scheduler.DAGScheduler: Job 733 finished: foreachPartition at PredictorEngineApp.java:153, took 9.608420 s 18/04/17 17:00:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x68394573 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x683945730x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59223, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94f8, negotiated timeout = 60000 18/04/17 17:00:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94f8 18/04/17 17:00:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94f8 closed 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.12 from job set of time 1523973600000 ms 18/04/17 17:00:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 718.0 (TID 718) in 9635 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:00:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 718.0, whose tasks have all completed, from pool 18/04/17 17:00:09 INFO scheduler.DAGScheduler: ResultStage 718 (foreachPartition at PredictorEngineApp.java:153) finished in 9.636 s 18/04/17 17:00:09 INFO scheduler.DAGScheduler: Job 718 finished: foreachPartition at PredictorEngineApp.java:153, took 9.660234 s 18/04/17 17:00:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x68d503b2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x68d503b20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37375, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c953b, negotiated timeout = 60000 18/04/17 17:00:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c953b 18/04/17 17:00:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c953b closed 18/04/17 17:00:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.22 from job set of time 1523973600000 ms 18/04/17 17:00:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 737.0 (TID 737) in 10908 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:00:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 737.0, whose tasks have all completed, from pool 18/04/17 17:00:11 INFO scheduler.DAGScheduler: ResultStage 737 (foreachPartition at PredictorEngineApp.java:153) finished in 10.909 s 18/04/17 17:00:11 INFO scheduler.DAGScheduler: Job 737 finished: foreachPartition at PredictorEngineApp.java:153, took 10.999410 s 18/04/17 17:00:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x55a65047 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x55a650470x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41977, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e32, negotiated timeout = 60000 18/04/17 17:00:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e32 18/04/17 17:00:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e32 closed 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.32 from job set of time 1523973600000 ms 18/04/17 17:00:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 717.0 (TID 717) in 11156 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:00:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 717.0, whose tasks have all completed, from pool 18/04/17 17:00:11 INFO scheduler.DAGScheduler: ResultStage 717 (foreachPartition at PredictorEngineApp.java:153) finished in 11.157 s 18/04/17 17:00:11 INFO scheduler.DAGScheduler: Job 717 finished: foreachPartition at PredictorEngineApp.java:153, took 11.177499 s 18/04/17 17:00:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3f7bb06a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3f7bb06a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59237, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94fb, negotiated timeout = 60000 18/04/17 17:00:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94fb 18/04/17 17:00:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94fb closed 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 716.0 (TID 716) in 11187 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:00:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 716.0, whose tasks have all completed, from pool 18/04/17 17:00:11 INFO scheduler.DAGScheduler: ResultStage 716 (foreachPartition at PredictorEngineApp.java:153) finished in 11.188 s 18/04/17 17:00:11 INFO scheduler.DAGScheduler: Job 716 finished: foreachPartition at PredictorEngineApp.java:153, took 11.205637 s 18/04/17 17:00:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x58c1ae65 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x58c1ae650x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41985, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.28 from job set of time 1523973600000 ms 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e34, negotiated timeout = 60000 18/04/17 17:00:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e34 18/04/17 17:00:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e34 closed 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 721.0 (TID 721) in 11207 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:00:11 INFO scheduler.DAGScheduler: ResultStage 721 (foreachPartition at PredictorEngineApp.java:153) finished in 11.208 s 18/04/17 17:00:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 721.0, whose tasks have all completed, from pool 18/04/17 17:00:11 INFO scheduler.DAGScheduler: Job 721 finished: foreachPartition at PredictorEngineApp.java:153, took 11.240425 s 18/04/17 17:00:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6c84adca connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6c84adca0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.29 from job set of time 1523973600000 ms 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37393, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c953d, negotiated timeout = 60000 18/04/17 17:00:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c953d 18/04/17 17:00:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c953d closed 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.19 from job set of time 1523973600000 ms 18/04/17 17:00:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 722.0 (TID 722) in 11481 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:00:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 722.0, whose tasks have all completed, from pool 18/04/17 17:00:11 INFO scheduler.DAGScheduler: ResultStage 722 (foreachPartition at PredictorEngineApp.java:153) finished in 11.482 s 18/04/17 17:00:11 INFO scheduler.DAGScheduler: Job 722 finished: foreachPartition at PredictorEngineApp.java:153, took 11.517316 s 18/04/17 17:00:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x525855b9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x525855b90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59249, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94fc, negotiated timeout = 60000 18/04/17 17:00:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94fc 18/04/17 17:00:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94fc closed 18/04/17 17:00:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.34 from job set of time 1523973600000 ms 18/04/17 17:00:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 719.0 (TID 719) in 14755 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:00:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 719.0, whose tasks have all completed, from pool 18/04/17 17:00:14 INFO scheduler.DAGScheduler: ResultStage 719 (foreachPartition at PredictorEngineApp.java:153) finished in 14.756 s 18/04/17 17:00:14 INFO scheduler.DAGScheduler: Job 720 finished: foreachPartition at PredictorEngineApp.java:153, took 14.782956 s 18/04/17 17:00:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5bda3352 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5bda33520x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37408, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9540, negotiated timeout = 60000 18/04/17 17:00:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9540 18/04/17 17:00:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9540 closed 18/04/17 17:00:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:14 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.24 from job set of time 1523973600000 ms 18/04/17 17:00:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 714.0 (TID 714) in 15238 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:00:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 714.0, whose tasks have all completed, from pool 18/04/17 17:00:15 INFO scheduler.DAGScheduler: ResultStage 714 (foreachPartition at PredictorEngineApp.java:153) finished in 15.238 s 18/04/17 17:00:15 INFO scheduler.DAGScheduler: Job 714 finished: foreachPartition at PredictorEngineApp.java:153, took 15.250660 s 18/04/17 17:00:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66f43bda connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x66f43bda0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59263, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a94fd, negotiated timeout = 60000 18/04/17 17:00:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a94fd 18/04/17 17:00:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a94fd closed 18/04/17 17:00:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.23 from job set of time 1523973600000 ms 18/04/17 17:00:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 726.0 (TID 726) in 16312 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:00:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 726.0, whose tasks have all completed, from pool 18/04/17 17:00:16 INFO scheduler.DAGScheduler: ResultStage 726 (foreachPartition at PredictorEngineApp.java:153) finished in 16.324 s 18/04/17 17:00:16 INFO scheduler.DAGScheduler: Job 726 finished: foreachPartition at PredictorEngineApp.java:153, took 16.372444 s 18/04/17 17:00:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1ceaf5be connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1ceaf5be0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42012, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e35, negotiated timeout = 60000 18/04/17 17:00:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e35 18/04/17 17:00:16 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e35 closed 18/04/17 17:00:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:16 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.1 from job set of time 1523973600000 ms 18/04/17 17:00:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 729.0 (TID 729) in 16922 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:00:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 729.0, whose tasks have all completed, from pool 18/04/17 17:00:17 INFO scheduler.DAGScheduler: ResultStage 729 (foreachPartition at PredictorEngineApp.java:153) finished in 16.923 s 18/04/17 17:00:17 INFO scheduler.DAGScheduler: Job 730 finished: foreachPartition at PredictorEngineApp.java:153, took 16.992623 s 18/04/17 17:00:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x535e483c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x535e483c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42015, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e36, negotiated timeout = 60000 18/04/17 17:00:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e36 18/04/17 17:00:17 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e36 closed 18/04/17 17:00:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:17 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.11 from job set of time 1523973600000 ms 18/04/17 17:00:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 478.0 (TID 478) in 561912 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:00:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 478.0, whose tasks have all completed, from pool 18/04/17 17:00:22 INFO scheduler.DAGScheduler: ResultStage 478 (foreachPartition at PredictorEngineApp.java:153) finished in 561.912 s 18/04/17 17:00:22 INFO scheduler.DAGScheduler: Job 479 finished: foreachPartition at PredictorEngineApp.java:153, took 561.942959 s 18/04/17 17:00:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xaf50912 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xaf509120x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37432, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9543, negotiated timeout = 60000 18/04/17 17:00:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9543 18/04/17 17:00:22 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9543 closed 18/04/17 17:00:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:22 INFO scheduler.JobScheduler: Finished job streaming job 1523973060000 ms.34 from job set of time 1523973060000 ms 18/04/17 17:00:22 INFO scheduler.JobScheduler: Total delay: 562.044 s for time 1523973060000 ms (execution: 561.989 s) 18/04/17 17:00:22 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:00:22 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 17:00:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 725.0 (TID 725) in 22791 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:00:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 725.0, whose tasks have all completed, from pool 18/04/17 17:00:22 INFO scheduler.DAGScheduler: ResultStage 725 (foreachPartition at PredictorEngineApp.java:153) finished in 22.791 s 18/04/17 17:00:22 INFO scheduler.DAGScheduler: Job 725 finished: foreachPartition at PredictorEngineApp.java:153, took 22.835712 s 18/04/17 17:00:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2d480e5a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2d480e5a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37436, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9544, negotiated timeout = 60000 18/04/17 17:00:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9544 18/04/17 17:00:22 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9544 closed 18/04/17 17:00:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:22 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.5 from job set of time 1523973600000 ms 18/04/17 17:00:24 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 728.0 (TID 728) in 24074 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:00:24 INFO cluster.YarnClusterScheduler: Removed TaskSet 728.0, whose tasks have all completed, from pool 18/04/17 17:00:24 INFO scheduler.DAGScheduler: ResultStage 728 (foreachPartition at PredictorEngineApp.java:153) finished in 24.074 s 18/04/17 17:00:24 INFO scheduler.DAGScheduler: Job 728 finished: foreachPartition at PredictorEngineApp.java:153, took 24.140600 s 18/04/17 17:00:24 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x65a90459 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:00:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x65a904590x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:00:24 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:00:24 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59294, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:00:24 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9502, negotiated timeout = 60000 18/04/17 17:00:24 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9502 18/04/17 17:00:24 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9502 closed 18/04/17 17:00:24 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:00:24 INFO scheduler.JobScheduler: Finished job streaming job 1523973600000 ms.10 from job set of time 1523973600000 ms 18/04/17 17:00:24 INFO scheduler.JobScheduler: Total delay: 24.228 s for time 1523973600000 ms (execution: 24.178 s) 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 936 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 936 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 936 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 936 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 937 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 937 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 937 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 937 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 938 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 938 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 938 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 938 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 939 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 939 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 939 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 939 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 940 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 940 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 940 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 940 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 941 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 941 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 941 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 941 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 942 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 942 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 942 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 942 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 943 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 943 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 943 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 943 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 944 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 944 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 944 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 944 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 945 from persistence list 18/04/17 
17:00:24 INFO storage.BlockManager: Removing RDD 945 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 945 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 945 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 946 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 946 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 946 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 946 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 947 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 947 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 947 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 947 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 948 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 948 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 948 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 948 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 949 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 949 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 949 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 949 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 950 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 950 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 950 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 950 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 951 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 951 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 951 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 951 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 952 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 952 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 952 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 952 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 953 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 953 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 953 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 953 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 954 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 954 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 954 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 954 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 955 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 955 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 955 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 955 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 956 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 956 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 956 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 956 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 957 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 957 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 957 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 957 
18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 958 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 958 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 958 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 958 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 959 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 959 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 959 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 959 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 960 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 960 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 960 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 960 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 961 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 961 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 961 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 961 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 962 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 962 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 962 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 962 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 963 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 963 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 963 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 963 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 964 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 964 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 964 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 964 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 965 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 965 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 965 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 965 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 966 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 966 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 966 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 966 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 967 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 967 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 967 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 967 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 968 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 968 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 968 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 968 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 969 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 969 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 969 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 969 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 970 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 970 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 
970 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 970 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 971 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 971 18/04/17 17:00:24 INFO kafka.KafkaRDD: Removing RDD 971 from persistence list 18/04/17 17:00:24 INFO storage.BlockManager: Removing RDD 971 18/04/17 17:00:24 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:00:24 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973480000 ms 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_737_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_737_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_712_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_712_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 713 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 715 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_713_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_713_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 714 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_715_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_715_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 716 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_714_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_714_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 718 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_716_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_716_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 717 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_718_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_718_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 719 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_717_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_717_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 721 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_719_piece0 on ***IP masked***:45737 in memory 
(size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_719_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO scheduler.JobScheduler: Added jobs for time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.0 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.1 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.2 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.3 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.0 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.4 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.6 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.3 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.5 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.4 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.8 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.7 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.9 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.10 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.11 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.12 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.13 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.14 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.13 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.15 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.17 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.16 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.14 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.17 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.16 from job set of time 1523973660000 ms 
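
The scheduler activity above follows a fixed cycle: a new job set is added every 60 s (batch times ...3600000 ms, ...3660000 ms), each batch fans out into about 36 numbered streaming jobs (ms.0 through ms.35), every job is a single-partition foreachPartition over a KafkaRDD created by createDirectStream at PredictorEngineApp.java:125, and the Kafka input RDDs of a finished batch are then unpersisted ("Removing RDD N from persistence list"). Jobs of the badly delayed 1523973060000 ms batch (562 s total delay) are still finishing while the 1523973600000 ms batch completes, i.e. jobs from different batches run concurrently. Below is a minimal driver-side sketch of an application that would produce this pattern; it is a reconstruction, not the actual PredictorEngineApp source, and the broker list, topic names, number of streams and the spark.streaming.concurrentJobs value are assumptions.

// Illustrative sketch only -- assumes the Spark 1.6 Java API with the Kafka 0.8
// direct-stream connector (spark-streaming-kafka); names and values marked
// "hypothetical" below are not taken from the log.
import java.util.*;
import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class PredictorEngineSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("predictor-engine");
        // Jobs from an old batch overlap with the next batch in the log, which is the
        // behaviour of a JobScheduler with more than one concurrent job. Value is an assumption.
        conf.set("spark.streaming.concurrentJobs", "4");

        // Batch times in the log are 60 000 ms apart, i.e. a one-minute batch interval.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.minutes(1));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical

        // One direct stream per topic, each with one output operation. Roughly 36 such
        // output operations would account for the jobs numbered ms.0 .. ms.35 per batch.
        for (String topic : Arrays.asList("topic-a", "topic-b" /* , ... */)) { // hypothetical names
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, Collections.singleton(topic));   // ~ createDirectStream at PredictorEngineApp.java:125

            stream.foreachRDD(rdd ->
                rdd.foreachPartition(records -> {                 // ~ foreachPartition at PredictorEngineApp.java:153
                    // HBase write omitted here; see the connection sketch at the end of this section.
                })
            );
        }

        jssc.start();            // from here on, JobScheduler adds a job set every minute
        jssc.awaitTermination();
        // Kafka input RDDs are unpersisted automatically once their batch falls out of the
        // remember window, producing the "Removing RDD N from persistence list" entries above.
    }
}
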
18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.20 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.19 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.18 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.21 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.21 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.22 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.23 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.24 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.25 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.26 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.27 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.28 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.29 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.31 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.32 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.30 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.30 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.33 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.34 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973660000 ms.35 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 739 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 739 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 739 (KafkaRDD[1019] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 
17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_739 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_739_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_739_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 739 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 739 (KafkaRDD[1019] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 739.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 740 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 740 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 
17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 740 (KafkaRDD[1035] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 739.0 (TID 739, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_740 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 720 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_721_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_721_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_740_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 722 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_740_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 740 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 740 (KafkaRDD[1035] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 740.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 741 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 741 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 741 (KafkaRDD[1014] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 740.0 (TID 740, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_720_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_741 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_720_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_739_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 724 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_722_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_722_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_741_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 
723 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_741_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 741 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 741 (KafkaRDD[1014] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 741.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 742 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 742 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 742 (KafkaRDD[1040] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_724_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 741.0 (TID 741, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_742 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_724_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_740_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 725 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_742_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_742_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_723_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 742 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 742 (KafkaRDD[1040] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 742.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 743 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 743 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_723_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 743 (KafkaRDD[1015] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 742.0 (TID 742, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 
2040 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_743 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 727 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_725_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_725_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 726 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_727_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_743_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_743_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 743 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 743 (KafkaRDD[1015] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 743.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 744 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 744 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 744 (KafkaRDD[1043] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 743.0 (TID 743, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_744 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_727_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 728 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_726_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_742_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_726_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_744_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_744_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 744 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 744 (KafkaRDD[1043] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 744.0 with 1 tasks 18/04/17 17:01:00 INFO 
scheduler.DAGScheduler: Got job 745 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 745 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 730 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 745 (KafkaRDD[1010] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 744.0 (TID 744, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_745 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_728_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_741_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_728_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 729 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_743_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_738_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_745_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_745_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_738_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 745 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 745 (KafkaRDD[1010] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 745.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 746 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 746 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 746 (KafkaRDD[1037] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 745.0 (TID 745, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_746 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 739 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_730_piece0 on ***IP masked***:45737 
in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_730_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 731 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_729_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_745_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_729_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_744_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 733 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_731_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_746_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_746_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 746 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 746 (KafkaRDD[1037] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 746.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 747 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 747 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 747 (KafkaRDD[1042] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_747 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 746.0 (TID 746, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_731_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 732 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_733_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_733_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 734 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_747_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_747_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_732_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, 
free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 747 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 747 (KafkaRDD[1042] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 747.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 748 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 748 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 748 (KafkaRDD[1013] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 747.0 (TID 747, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_748 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_732_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_748_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_748_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 736 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 748 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 748 (KafkaRDD[1013] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 748.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 749 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 749 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 749 (KafkaRDD[1033] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_734_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 748.0 (TID 748, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_749 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_734_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 735 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_736_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed 
broadcast_736_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_749_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_749_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 749 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 749 (KafkaRDD[1033] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 749.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 751 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 750 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 750 (KafkaRDD[1027] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 737 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_750 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 749.0 (TID 749, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_735_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Removed broadcast_735_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_747_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_746_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_750_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO spark.ContextCleaner: Cleaned accumulator 738 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_750_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 750 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 750 (KafkaRDD[1027] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 750.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 750 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 751 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 751 (KafkaRDD[1009] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 
18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_751 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 750.0 (TID 750, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_751_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_751_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 751 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 751 (KafkaRDD[1009] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 751.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 752 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 752 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 752 (KafkaRDD[1031] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_752 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 751.0 (TID 751, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_752_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_752_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 752 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 752 (KafkaRDD[1031] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 752.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 753 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 753 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 753 (KafkaRDD[1017] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_753 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 752.0 (TID 752, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_748_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_749_piece0 in 
memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_753_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_753_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 753 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 753 (KafkaRDD[1017] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 753.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 754 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 754 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 754 (KafkaRDD[1026] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_754 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 753.0 (TID 753, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_750_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_754_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_754_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 754 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 754 (KafkaRDD[1026] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 754.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 755 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 755 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 755 (KafkaRDD[1032] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_755 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 754.0 (TID 754, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 740.0 (TID 740) in 55 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 740.0, whose tasks have all completed, from pool 18/04/17 17:01:00 INFO storage.BlockManagerInfo: 
Added broadcast_751_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_755_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_755_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 755 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 755 (KafkaRDD[1032] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 755.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 756 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 756 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 756 (KafkaRDD[1018] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_756 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 755.0 (TID 755, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_756_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_756_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 756 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 756 (KafkaRDD[1018] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 756.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 757 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 757 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 757 (KafkaRDD[1023] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_757 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 756.0 (TID 756, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_753_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_752_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_757_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 
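
The repeating pattern in this part of the log — a burst of single-task jobs for the minute-aligned batch 1523973660000 ms (jobs 740–765 here), each a one-partition ResultStage over a KafkaRDD produced by the createDirectStream call at PredictorEngineApp.java:125 and driven by the foreachPartition call at PredictorEngineApp.java:153, with short-lived hconnection-* ZooKeeper sessions (baseZNode=/hbase) opened and closed around each job's completion — is characteristic of a Spark Streaming application that reads Kafka with the direct API and writes results to HBase. The sketch below is a minimal, hypothetical reconstruction of driver code that would produce this kind of output, assuming the Spark 1.x / Kafka 0.8 direct-stream API and the HBase 1.x client; the class name, broker list, topic, HBase table, column family and 60-second batch interval are illustrative assumptions, not values taken from the log.

```java
// Hypothetical sketch only: identifiers below are assumptions unless they appear in the log itself.
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public class PredictorEngineAppSketch {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("predictor-engine");
    // 60 s batches are an assumption inferred from the minute-aligned batch timestamp in the log.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical brokers
    Set<String> topics = Collections.singleton("events");                 // hypothetical topic

    // Analogous to the createDirectStream call the log attributes to PredictorEngineApp.java:125;
    // the direct API yields one RDD partition per Kafka topic partition, hence the single-task stages.
    JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, topics);

    // Analogous to the foreachPartition call the log attributes to PredictorEngineApp.java:153:
    // each partition opens an HBase connection (a ZooKeeper session against /hbase), writes its
    // records, and closes the connection again.
    stream.foreachRDD(rdd -> rdd.foreachPartition(records -> {
      Configuration hbaseConf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
      try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
           Table table = connection.getTable(TableName.valueOf("predictions"))) { // hypothetical table
        while (records.hasNext()) {
          Tuple2<String, String> record = records.next();
          Put put = new Put(Bytes.toBytes(record._1()));
          // "d:value" is a placeholder column family/qualifier, not taken from the log.
          put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"), Bytes.toBytes(record._2()));
          table.put(put);
        }
      }
    }));

    jssc.start();
    jssc.awaitTermination();
  }
}
```

Opening the HBase Connection once per partition, rather than once per record, keeps the number of ZooKeeper sessions proportional to the number of partitions in each batch; it is the usual compromise between connection overhead and executor resource usage for this kind of foreachPartition sink.
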
18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_757_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 757 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 757 (KafkaRDD[1023] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 757.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 758 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 758 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 758 (KafkaRDD[1030] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_755_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_758 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 757.0 (TID 757, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_754_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_758_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_758_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 758 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 758 (KafkaRDD[1030] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 758.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 759 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 759 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 759 (KafkaRDD[1039] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_759 stored as values in memory (estimated size 5.7 KB, free 490.4 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 758.0 (TID 758, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_756_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_759_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.4 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_759_piece0 in memory 
on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 759 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 759 (KafkaRDD[1039] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 759.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 761 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 760 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 760 (KafkaRDD[1028] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_760 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 759.0 (TID 759, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_760_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_760_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 760 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 760 (KafkaRDD[1028] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 760.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 760 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 761 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_757_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 761 (KafkaRDD[1020] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_761 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 760.0 (TID 760, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_761_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_761_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 761 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 761 (KafkaRDD[1020] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO 
cluster.YarnClusterScheduler: Adding task set 761.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 762 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 762 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 762 (KafkaRDD[1034] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_762 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_759_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 761.0 (TID 761, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_762_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_762_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 762 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 762 (KafkaRDD[1034] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 762.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 763 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 763 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 763 (KafkaRDD[1016] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_763 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 762.0 (TID 762, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_763_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_763_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 763 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 763 (KafkaRDD[1016] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 763.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 764 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 764 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: 
Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 764 (KafkaRDD[1036] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_764 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 763.0 (TID 763, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_760_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_764_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_764_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 764 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 764 (KafkaRDD[1036] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 764.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Got job 765 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 765 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting ResultStage 765 (KafkaRDD[1041] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_765 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 764.0 (TID 764, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_761_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_758_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.MemoryStore: Block broadcast_765_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_765_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:01:00 INFO spark.SparkContext: Created broadcast 765 from broadcast at DAGScheduler.scala:1006 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 765 (KafkaRDD[1041] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Adding task set 765.0 with 1 tasks 18/04/17 17:01:00 INFO scheduler.DAGScheduler: ResultStage 740 (foreachPartition at PredictorEngineApp.java:153) finished in 0.086 s 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Job 740 finished: foreachPartition at PredictorEngineApp.java:153, took 0.101339 s 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 765.0 (TID 765, ***hostname masked***, executor 11, 
partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:01:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x36ceda6f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x36ceda6f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42237, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_763_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_764_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_765_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO storage.BlockManagerInfo: Added broadcast_762_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e41, negotiated timeout = 60000 18/04/17 17:01:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e41 18/04/17 17:01:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e41 closed 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.27 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 753.0 (TID 753) in 80 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 753.0, whose tasks have all completed, from pool 18/04/17 17:01:00 INFO scheduler.DAGScheduler: ResultStage 753 (foreachPartition at PredictorEngineApp.java:153) finished in 0.080 s 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Job 753 finished: foreachPartition at PredictorEngineApp.java:153, took 0.145377 s 18/04/17 17:01:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x30ee2e0f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x30ee2e0f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42240, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e43, negotiated timeout = 60000 18/04/17 17:01:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e43 18/04/17 17:01:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e43 closed 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.9 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 742.0 (TID 742) in 168 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 742.0, whose tasks have all completed, from pool 18/04/17 17:01:00 INFO scheduler.DAGScheduler: ResultStage 742 (foreachPartition at PredictorEngineApp.java:153) finished in 0.168 s 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Job 742 finished: foreachPartition at PredictorEngineApp.java:153, took 0.191590 s 18/04/17 17:01:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x528888ea connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x528888ea0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42243, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e45, negotiated timeout = 60000 18/04/17 17:01:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e45 18/04/17 17:01:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e45 closed 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.32 from job set of time 1523973660000 ms 18/04/17 17:01:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 744.0 (TID 744) in 331 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:01:00 INFO scheduler.DAGScheduler: ResultStage 744 (foreachPartition at PredictorEngineApp.java:153) finished in 0.331 s 18/04/17 17:01:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 744.0, whose tasks have all completed, from pool 18/04/17 17:01:00 INFO scheduler.DAGScheduler: Job 744 finished: foreachPartition at PredictorEngineApp.java:153, took 0.360937 s 18/04/17 17:01:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2983973c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2983973c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59502, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9514, negotiated timeout = 60000 18/04/17 17:01:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9514 18/04/17 17:01:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9514 closed 18/04/17 17:01:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.35 from job set of time 1523973660000 ms 18/04/17 17:01:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 763.0 (TID 763) in 1213 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:01:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 763.0, whose tasks have all completed, from pool 18/04/17 17:01:01 INFO scheduler.DAGScheduler: ResultStage 763 (foreachPartition at PredictorEngineApp.java:153) finished in 1.214 s 18/04/17 17:01:01 INFO scheduler.DAGScheduler: Job 763 finished: foreachPartition at PredictorEngineApp.java:153, took 1.308194 s 18/04/17 17:01:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x62612422 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x626124220x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37655, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c954f, negotiated timeout = 60000 18/04/17 17:01:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c954f 18/04/17 17:01:01 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c954f closed 18/04/17 17:01:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.8 from job set of time 1523973660000 ms 18/04/17 17:01:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 749.0 (TID 749) in 1555 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:01:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 749.0, whose tasks have all completed, from pool 18/04/17 17:01:01 INFO scheduler.DAGScheduler: ResultStage 749 (foreachPartition at PredictorEngineApp.java:153) finished in 1.556 s 18/04/17 17:01:01 INFO scheduler.DAGScheduler: Job 749 finished: foreachPartition at PredictorEngineApp.java:153, took 1.609897 s 18/04/17 17:01:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x10ffaf46 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x10ffaf460x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59509, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9518, negotiated timeout = 60000 18/04/17 17:01:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9518 18/04/17 17:01:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9518 closed 18/04/17 17:01:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.25 from job set of time 1523973660000 ms 18/04/17 17:01:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 759.0 (TID 759) in 2262 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:01:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 759.0, whose tasks have all completed, from pool 18/04/17 17:01:02 INFO scheduler.DAGScheduler: ResultStage 759 (foreachPartition at PredictorEngineApp.java:153) finished in 2.262 s 18/04/17 17:01:02 INFO scheduler.DAGScheduler: Job 759 finished: foreachPartition at PredictorEngineApp.java:153, took 2.349371 s 18/04/17 17:01:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x674832b5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x674832b50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59513, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9519, negotiated timeout = 60000 18/04/17 17:01:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9519 18/04/17 17:01:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9519 closed 18/04/17 17:01:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.31 from job set of time 1523973660000 ms 18/04/17 17:01:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 743.0 (TID 743) in 2873 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:01:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 743.0, whose tasks have all completed, from pool 18/04/17 17:01:02 INFO scheduler.DAGScheduler: ResultStage 743 (foreachPartition at PredictorEngineApp.java:153) finished in 2.874 s 18/04/17 17:01:02 INFO scheduler.DAGScheduler: Job 743 finished: foreachPartition at PredictorEngineApp.java:153, took 2.900492 s 18/04/17 17:01:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x179e56d8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x179e56d80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42261, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e4b, negotiated timeout = 60000 18/04/17 17:01:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e4b 18/04/17 17:01:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e4b closed 18/04/17 17:01:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.7 from job set of time 1523973660000 ms 18/04/17 17:01:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 741.0 (TID 741) in 4676 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:01:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 741.0, whose tasks have all completed, from pool 18/04/17 17:01:04 INFO scheduler.DAGScheduler: ResultStage 741 (foreachPartition at PredictorEngineApp.java:153) finished in 4.676 s 18/04/17 17:01:04 INFO scheduler.DAGScheduler: Job 741 finished: foreachPartition at PredictorEngineApp.java:153, took 4.696174 s 18/04/17 17:01:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3866635e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3866635e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42267, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e4d, negotiated timeout = 60000 18/04/17 17:01:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e4d 18/04/17 17:01:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e4d closed 18/04/17 17:01:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.6 from job set of time 1523973660000 ms 18/04/17 17:01:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 757.0 (TID 757) in 5235 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:01:05 INFO scheduler.DAGScheduler: ResultStage 757 (foreachPartition at PredictorEngineApp.java:153) finished in 5.242 s 18/04/17 17:01:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 757.0, whose tasks have all completed, from pool 18/04/17 17:01:05 INFO scheduler.DAGScheduler: Job 757 finished: foreachPartition at PredictorEngineApp.java:153, took 5.318494 s 18/04/17 17:01:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x391fb9a7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x391fb9a70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59528, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a951b, negotiated timeout = 60000 18/04/17 17:01:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a951b 18/04/17 17:01:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a951b closed 18/04/17 17:01:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.15 from job set of time 1523973660000 ms 18/04/17 17:01:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 764.0 (TID 764) in 8025 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:01:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 764.0, whose tasks have all completed, from pool 18/04/17 17:01:08 INFO scheduler.DAGScheduler: ResultStage 764 (foreachPartition at PredictorEngineApp.java:153) finished in 8.026 s 18/04/17 17:01:08 INFO scheduler.DAGScheduler: Job 764 finished: foreachPartition at PredictorEngineApp.java:153, took 8.122136 s 18/04/17 17:01:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d7bfb75 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7d7bfb750x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37683, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 750.0 (TID 750) in 8071 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:01:08 INFO scheduler.DAGScheduler: ResultStage 750 (foreachPartition at PredictorEngineApp.java:153) finished in 8.072 s 18/04/17 17:01:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 750.0, whose tasks have all completed, from pool 18/04/17 17:01:08 INFO scheduler.DAGScheduler: Job 751 finished: foreachPartition at PredictorEngineApp.java:153, took 8.128282 s 18/04/17 17:01:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2dd97f9a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2dd97f9a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9553, negotiated timeout = 60000 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59535, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9553 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a951f, negotiated timeout = 60000 18/04/17 17:01:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9553 closed 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a951f 18/04/17 17:01:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.28 from job set of time 1523973660000 ms 18/04/17 17:01:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a951f closed 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.19 from job set of time 1523973660000 ms 18/04/17 17:01:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 747.0 (TID 747) in 8662 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:01:08 INFO scheduler.DAGScheduler: ResultStage 747 (foreachPartition at PredictorEngineApp.java:153) finished in 8.662 s 18/04/17 17:01:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 747.0, whose tasks have all completed, from pool 18/04/17 17:01:08 INFO scheduler.DAGScheduler: Job 747 finished: foreachPartition at PredictorEngineApp.java:153, took 8.710059 s 18/04/17 17:01:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x757cfa3e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x757cfa3e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37689, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9554, negotiated timeout = 60000 18/04/17 17:01:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9554 18/04/17 17:01:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9554 closed 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.34 from job set of time 1523973660000 ms 18/04/17 17:01:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 765.0 (TID 765) in 8688 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:01:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 765.0, whose tasks have all completed, from pool 18/04/17 17:01:08 INFO scheduler.DAGScheduler: ResultStage 765 (foreachPartition at PredictorEngineApp.java:153) finished in 8.688 s 18/04/17 17:01:08 INFO scheduler.DAGScheduler: Job 765 finished: foreachPartition at PredictorEngineApp.java:153, took 8.786581 s 18/04/17 17:01:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5612093a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5612093a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59543, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9520, negotiated timeout = 60000 18/04/17 17:01:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9520 18/04/17 17:01:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9520 closed 18/04/17 17:01:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.33 from job set of time 1523973660000 ms 18/04/17 17:01:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 762.0 (TID 762) in 9835 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:01:10 INFO scheduler.DAGScheduler: ResultStage 762 (foreachPartition at PredictorEngineApp.java:153) finished in 9.836 s 18/04/17 17:01:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 762.0, whose tasks have all completed, from pool 18/04/17 17:01:10 INFO scheduler.DAGScheduler: Job 762 finished: foreachPartition at PredictorEngineApp.java:153, took 9.928393 s 18/04/17 17:01:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5f89497c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5f89497c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59548, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9521, negotiated timeout = 60000 18/04/17 17:01:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9521 18/04/17 17:01:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9521 closed 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.26 from job set of time 1523973660000 ms 18/04/17 17:01:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 761.0 (TID 761) in 10206 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:01:10 INFO scheduler.DAGScheduler: ResultStage 761 (foreachPartition at PredictorEngineApp.java:153) finished in 10.206 s 18/04/17 17:01:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 761.0, whose tasks have all completed, from pool 18/04/17 17:01:10 INFO scheduler.DAGScheduler: Job 760 finished: foreachPartition at PredictorEngineApp.java:153, took 10.296617 s 18/04/17 17:01:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b8c8436 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1b8c84360x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42296, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e52, negotiated timeout = 60000 18/04/17 17:01:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e52 18/04/17 17:01:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e52 closed 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.12 from job set of time 1523973660000 ms 18/04/17 17:01:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 754.0 (TID 754) in 10825 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:01:10 INFO scheduler.DAGScheduler: ResultStage 754 (foreachPartition at PredictorEngineApp.java:153) finished in 10.826 s 18/04/17 17:01:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 754.0, whose tasks have all completed, from pool 18/04/17 17:01:10 INFO scheduler.DAGScheduler: Job 754 finished: foreachPartition at PredictorEngineApp.java:153, took 10.892990 s 18/04/17 17:01:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x49c1c59c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x49c1c59c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37704, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9557, negotiated timeout = 60000 18/04/17 17:01:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9557 18/04/17 17:01:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9557 closed 18/04/17 17:01:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.18 from job set of time 1523973660000 ms 18/04/17 17:01:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 745.0 (TID 745) in 11230 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:01:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 745.0, whose tasks have all completed, from pool 18/04/17 17:01:11 INFO scheduler.DAGScheduler: ResultStage 745 (foreachPartition at PredictorEngineApp.java:153) finished in 11.230 s 18/04/17 17:01:11 INFO scheduler.DAGScheduler: Job 745 finished: foreachPartition at PredictorEngineApp.java:153, took 11.264610 s 18/04/17 17:01:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3b81c0ad connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3b81c0ad0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37708, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9558, negotiated timeout = 60000 18/04/17 17:01:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9558 18/04/17 17:01:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9558 closed 18/04/17 17:01:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.2 from job set of time 1523973660000 ms 18/04/17 17:01:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 760.0 (TID 760) in 11418 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:01:11 INFO scheduler.DAGScheduler: ResultStage 760 (foreachPartition at PredictorEngineApp.java:153) finished in 11.419 s 18/04/17 17:01:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 760.0, whose tasks have all completed, from pool 18/04/17 17:01:11 INFO scheduler.DAGScheduler: Job 761 finished: foreachPartition at PredictorEngineApp.java:153, took 11.507481 s 18/04/17 17:01:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1cc6c07 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1cc6c070x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37711, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c955a, negotiated timeout = 60000 18/04/17 17:01:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c955a 18/04/17 17:01:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c955a closed 18/04/17 17:01:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.20 from job set of time 1523973660000 ms 18/04/17 17:01:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 752.0 (TID 752) in 13090 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:01:13 INFO scheduler.DAGScheduler: ResultStage 752 (foreachPartition at PredictorEngineApp.java:153) finished in 13.091 s 18/04/17 17:01:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 752.0, whose tasks have all completed, from pool 18/04/17 17:01:13 INFO scheduler.DAGScheduler: Job 752 finished: foreachPartition at PredictorEngineApp.java:153, took 13.153602 s 18/04/17 17:01:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x76d20564 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x76d205640x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59567, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9524, negotiated timeout = 60000 18/04/17 17:01:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9524 18/04/17 17:01:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9524 closed 18/04/17 17:01:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.23 from job set of time 1523973660000 ms 18/04/17 17:01:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 746.0 (TID 746) in 14902 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:01:15 INFO scheduler.DAGScheduler: ResultStage 746 (foreachPartition at PredictorEngineApp.java:153) finished in 14.904 s 18/04/17 17:01:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 746.0, whose tasks have all completed, from pool 18/04/17 17:01:15 INFO scheduler.DAGScheduler: Job 746 finished: foreachPartition at PredictorEngineApp.java:153, took 14.945419 s 18/04/17 17:01:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x771e2d99 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x771e2d990x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37721, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c955d, negotiated timeout = 60000 18/04/17 17:01:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c955d 18/04/17 17:01:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c955d closed 18/04/17 17:01:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.29 from job set of time 1523973660000 ms 18/04/17 17:01:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 751.0 (TID 751) in 15954 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:01:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 751.0, whose tasks have all completed, from pool 18/04/17 17:01:16 INFO scheduler.DAGScheduler: ResultStage 751 (foreachPartition at PredictorEngineApp.java:153) finished in 15.955 s 18/04/17 17:01:16 INFO scheduler.DAGScheduler: Job 750 finished: foreachPartition at PredictorEngineApp.java:153, took 16.014617 s 18/04/17 17:01:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x8be437 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x8be4370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37726, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c955f, negotiated timeout = 60000 18/04/17 17:01:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c955f 18/04/17 17:01:16 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c955f closed 18/04/17 17:01:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:16 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.1 from job set of time 1523973660000 ms 18/04/17 17:01:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 739.0 (TID 739) in 17109 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:01:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 739.0, whose tasks have all completed, from pool 18/04/17 17:01:17 INFO scheduler.DAGScheduler: ResultStage 739 (foreachPartition at PredictorEngineApp.java:153) finished in 17.109 s 18/04/17 17:01:17 INFO scheduler.DAGScheduler: Job 739 finished: foreachPartition at PredictorEngineApp.java:153, took 17.117406 s 18/04/17 17:01:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x392ca6b4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x392ca6b40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59581, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9526, negotiated timeout = 60000 18/04/17 17:01:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9526 18/04/17 17:01:17 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9526 closed 18/04/17 17:01:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:17 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.11 from job set of time 1523973660000 ms 18/04/17 17:01:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 755.0 (TID 755) in 17100 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:01:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 755.0, whose tasks have all completed, from pool 18/04/17 17:01:17 INFO scheduler.DAGScheduler: ResultStage 755 (foreachPartition at PredictorEngineApp.java:153) finished in 17.101 s 18/04/17 17:01:17 INFO scheduler.DAGScheduler: Job 755 finished: foreachPartition at PredictorEngineApp.java:153, took 17.170981 s 18/04/17 17:01:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7f793917 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7f7939170x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37733, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9560, negotiated timeout = 60000 18/04/17 17:01:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9560 18/04/17 17:01:17 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9560 closed 18/04/17 17:01:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:17 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.24 from job set of time 1523973660000 ms 18/04/17 17:01:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 748.0 (TID 748) in 25415 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:01:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 748.0, whose tasks have all completed, from pool 18/04/17 17:01:25 INFO scheduler.DAGScheduler: ResultStage 748 (foreachPartition at PredictorEngineApp.java:153) finished in 25.415 s 18/04/17 17:01:25 INFO scheduler.DAGScheduler: Job 748 finished: foreachPartition at PredictorEngineApp.java:153, took 25.465340 s 18/04/17 17:01:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x52a974e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x52a974e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37748, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9562, negotiated timeout = 60000 18/04/17 17:01:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9562 18/04/17 17:01:25 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9562 closed 18/04/17 17:01:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:25 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.5 from job set of time 1523973660000 ms 18/04/17 17:01:26 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 758.0 (TID 758) in 26476 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:01:26 INFO cluster.YarnClusterScheduler: Removed TaskSet 758.0, whose tasks have all completed, from pool 18/04/17 17:01:26 INFO scheduler.DAGScheduler: ResultStage 758 (foreachPartition at PredictorEngineApp.java:153) finished in 26.476 s 18/04/17 17:01:26 INFO scheduler.DAGScheduler: Job 758 finished: foreachPartition at PredictorEngineApp.java:153, took 26.561382 s 18/04/17 17:01:26 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x684d3216 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:01:26 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x684d32160x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:01:26 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:01:26 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42347, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:01:26 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e59, negotiated timeout = 60000 18/04/17 17:01:26 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e59 18/04/17 17:01:26 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e59 closed 18/04/17 17:01:26 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:01:26 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.22 from job set of time 1523973660000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Added jobs for time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.1 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.0 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.2 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.0 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.3 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.5 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.4 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.6 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.3 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.7 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.4 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.8 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.10 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.9 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.11 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.12 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.13 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.13 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.14 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.16 from job set of time 1523973720000 ms 18/04/17 17:02:00 
INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.15 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.14 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.16 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.19 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.17 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.18 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.20 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.17 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.21 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.22 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.21 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.24 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.23 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.25 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.26 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.27 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.28 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.29 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.30 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.31 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.30 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.32 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.34 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.33 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973720000 ms.35 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.35 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO 
scheduler.DAGScheduler: Got job 766 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 766 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 766 (KafkaRDD[1045] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_766 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_766_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_766_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 766 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 766 (KafkaRDD[1045] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 766.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 767 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 767 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 767 (KafkaRDD[1068] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 766.0 (TID 766, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_767 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_767_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_767_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 767 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 767 (KafkaRDD[1068] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 767.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 768 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 768 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 768 (KafkaRDD[1072] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 767.0 (TID 767, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_768 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_768_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_768_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 768 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 768 (KafkaRDD[1072] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 768.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 769 (foreachPartition at PredictorEngineApp.java:153) with 1 
output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 769 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 769 (KafkaRDD[1078] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 768.0 (TID 768, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_769 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_769_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_769_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 769 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 769 (KafkaRDD[1078] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 769.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 770 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 770 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 770 (KafkaRDD[1071] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 769.0 (TID 769, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_770 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_767_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_770_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_770_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 770 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 770 (KafkaRDD[1071] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 770.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 771 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 771 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 771 
(KafkaRDD[1051] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 770.0 (TID 770, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_771 stored as values in memory (estimated size 5.7 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_766_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_771_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_771_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 771 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 771 (KafkaRDD[1051] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 771.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 773 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 772 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 772 (KafkaRDD[1053] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 771.0 (TID 771, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_772 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_764_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_772_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.3 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_772_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 772 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 772 (KafkaRDD[1053] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 772.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 772 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 773 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 773 (KafkaRDD[1050] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 772.0 (TID 772, ***hostname masked***, executor 5, 
partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_768_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_764_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_773 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_773_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_773_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 773 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 773 (KafkaRDD[1050] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 773.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 774 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 774 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 774 (KafkaRDD[1064] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_774 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 773.0 (TID 773, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_770_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_774_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_774_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 774 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 774 (KafkaRDD[1064] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 774.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 775 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 775 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 775 (KafkaRDD[1077] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_775 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 
774.0 (TID 774, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_775_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_775_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 775 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 775 (KafkaRDD[1077] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 775.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 776 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 776 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 776 (KafkaRDD[1052] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_776 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 775.0 (TID 775, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_769_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_776_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_776_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_739_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 776 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_773_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 776 (KafkaRDD[1052] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 776.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 777 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 777 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 777 (KafkaRDD[1066] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_777 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 776.0 (TID 776, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 
bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_739_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 740 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_777_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_777_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_741_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 777 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 777 (KafkaRDD[1066] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 777.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 778 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 778 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 778 (KafkaRDD[1073] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_778 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 777.0 (TID 777, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_741_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 742 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_778_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_772_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_778_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_740_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 778 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 778 (KafkaRDD[1073] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 778.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 779 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 779 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 779 (KafkaRDD[1056] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_779 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 778.0 (TID 778, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_740_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_774_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 741 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_743_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_779_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_779_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 779 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 779 (KafkaRDD[1056] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 779.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 780 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 780 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 780 (KafkaRDD[1054] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_780 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_743_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 779.0 (TID 779, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_780_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_780_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 780 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 780 (KafkaRDD[1054] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 780.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 781 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 781 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing 
parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 781 (KafkaRDD[1059] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_781 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_778_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 780.0 (TID 780, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_776_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_781_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_781_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 781 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 781 (KafkaRDD[1059] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 781.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 782 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 782 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 782 (KafkaRDD[1076] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_782 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 781.0 (TID 781, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 744 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_777_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_742_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_782_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_782_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 782 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 782 (KafkaRDD[1076] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 782.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 783 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 783 (foreachPartition at PredictorEngineApp.java:153) 
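
Every scheduler entry in this stretch traces back to two call sites in the driver program: the direct Kafka stream created at PredictorEngineApp.java:125 (the KafkaRDDs named in each ResultStage) and the foreachPartition output action at PredictorEngineApp.java:153 (one single-stage job per stream per batch), while the hconnection/ZooKeeper lines come from HBase connections opened inside the tasks. Below is a minimal sketch of what that driver code presumably looks like, assuming Spark 1.6 with the Kafka 0.8 direct-stream API and the HBase 1.x client; broker addresses, topic, table and column-family names, and the batch interval are invented for illustration and are not taken from the log.

```java
// Hypothetical reconstruction of the two call sites the scheduler log references
// (createDirectStream at PredictorEngineApp.java:125, foreachPartition at :153).
// Broker list, topic, table, column family and the batch interval are assumptions.
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public class PredictorEngineAppSketch {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("predictor-engine");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60)); // interval assumed

    Map<String, String> kafkaParams = new HashMap<String, String>();
    kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092");        // assumed
    Set<String> topics = new HashSet<String>(Arrays.asList("predictor-input"));  // assumed

    // ~ PredictorEngineApp.java:125 in the log: each batch interval turns the
    // fetched offset ranges into one KafkaRDD per registered stream.
    JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, topics);

    // ~ PredictorEngineApp.java:153 in the log: one output job per batch whose
    // single ResultStage runs foreachPartition; each task opens and closes an
    // HBase connection, which would match the short-lived ZooKeeper sessions logged.
    stream.foreachRDD((VoidFunction<JavaPairRDD<String, String>>) rdd ->
        rdd.foreachPartition(records -> {
          // hbase-site.xml is shipped with the job, so create() can pick it up
          try (Connection hbase = ConnectionFactory.createConnection(HBaseConfiguration.create());
               Table table = hbase.getTable(TableName.valueOf("predictions"))) { // table name assumed
            while (records.hasNext()) {
              Tuple2<String, String> rec = records.next();
              Put put = new Put(Bytes.toBytes(rec._1()));
              put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"), Bytes.toBytes(rec._2()));
              table.put(put);
            }
          }
        }));

    jssc.start();
    jssc.awaitTermination();
  }
}
```

Opening a fresh HBase connection per partition, as sketched, would explain the burst of open/negotiate/close ZooKeeper sessions that accompanies every finished foreachPartition job later in this log; this is an inference from the log pattern, not something the log states directly.
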
18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 783 (KafkaRDD[1049] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_742_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_783 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 782.0 (TID 782, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_780_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 743 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_745_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_779_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_745_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_783_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_783_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 783 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 783 (KafkaRDD[1049] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 783.0 with 1 tasks 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_771_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 784 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 784 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 784 (KafkaRDD[1067] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_784 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 783.0 (TID 783, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_784_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_784_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 784 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 
784 (KafkaRDD[1067] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 784.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 785 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 785 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 785 (KafkaRDD[1055] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_785 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 784.0 (TID 784, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_785_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_785_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 785 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 785 (KafkaRDD[1055] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 785.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 786 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 786 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 786 (KafkaRDD[1062] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_783_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_786 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 785.0 (TID 785, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_782_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_786_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_786_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 786 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 786 (KafkaRDD[1062] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 786.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 787 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 787 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 787 (KafkaRDD[1069] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_787 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 746 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 786.0 (TID 786, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_781_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_744_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_784_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_787_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_787_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 787 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 787 (KafkaRDD[1069] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 787.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 788 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 788 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 788 (KafkaRDD[1075] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_788 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 787.0 (TID 787, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_744_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_788_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_788_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 788 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 788 (KafkaRDD[1075] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 
INFO cluster.YarnClusterScheduler: Adding task set 788.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 789 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 789 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 789 (KafkaRDD[1046] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_789 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 788.0 (TID 788, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 745 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_747_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_789_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_789_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 789 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 789 (KafkaRDD[1046] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 789.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 790 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 790 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_747_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 790 (KafkaRDD[1070] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_790 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 789.0 (TID 789, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_790_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_790_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 790 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 790 (KafkaRDD[1070] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 790.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Got job 791 (foreachPartition at 
PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 791 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting ResultStage 791 (KafkaRDD[1063] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_791 stored as values in memory (estimated size 5.7 KB, free 490.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 790.0 (TID 790, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:02:00 INFO storage.MemoryStore: Block broadcast_791_piece0 stored as bytes in memory (estimated size 3.1 KB, free 490.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_791_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO spark.SparkContext: Created broadcast 791 from broadcast at DAGScheduler.scala:1006 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 791 (KafkaRDD[1063] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Adding task set 791.0 with 1 tasks 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 791.0 (TID 791, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_787_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_789_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_788_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 748 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_746_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_746_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 747 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_791_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_775_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_749_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_785_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_749_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 750 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_748_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_748_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 
GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 749 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_750_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_750_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 751 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_752_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.1 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_752_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 753 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_751_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_751_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 752 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_754_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_790_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_754_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 755 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_753_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 782.0 (TID 782) in 61 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: ResultStage 782 (foreachPartition at PredictorEngineApp.java:153) finished in 0.062 s 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 782.0, whose tasks have all completed, from pool 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Job 782 finished: foreachPartition at PredictorEngineApp.java:153, took 0.131950 s 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_753_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72f07f96 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x72f07f960x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 754 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_755_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_755_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59741, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 756 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_758_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_758_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 759 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_757_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_757_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Added broadcast_786_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 758 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_760_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_760_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9533, negotiated timeout = 60000 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 761 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_759_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_759_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 760 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_762_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_762_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 763 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_761_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9533 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_761_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 762 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 765 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_763_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9533 closed 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_763_piece0 on ***hostname masked***:42188 in memory 
(size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 764 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_765_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:00 INFO storage.BlockManagerInfo: Removed broadcast_765_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:00 INFO spark.ContextCleaner: Cleaned accumulator 766 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.32 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 790.0 (TID 790) in 80 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: ResultStage 790 (foreachPartition at PredictorEngineApp.java:153) finished in 0.081 s 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 790.0, whose tasks have all completed, from pool 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Job 790 finished: foreachPartition at PredictorEngineApp.java:153, took 0.169297 s 18/04/17 17:02:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6f60dfaa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6f60dfaa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37893, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9570, negotiated timeout = 60000 18/04/17 17:02:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9570 18/04/17 17:02:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9570 closed 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 769.0 (TID 769) in 181 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:02:00 INFO scheduler.DAGScheduler: ResultStage 769 (foreachPartition at PredictorEngineApp.java:153) finished in 0.181 s 18/04/17 17:02:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 769.0, whose tasks have all completed, from pool 18/04/17 17:02:00 INFO scheduler.DAGScheduler: Job 769 finished: foreachPartition at PredictorEngineApp.java:153, took 0.198390 s 18/04/17 17:02:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6f1ed429 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6f1ed4290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42491, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.26 from job set of time 1523973720000 ms 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e65, negotiated timeout = 60000 18/04/17 17:02:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e65 18/04/17 17:02:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e65 closed 18/04/17 17:02:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.34 from job set of time 1523973720000 ms 18/04/17 17:02:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 787.0 (TID 787) in 939 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:02:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 787.0, whose tasks have all completed, from pool 18/04/17 17:02:01 INFO scheduler.DAGScheduler: ResultStage 787 (foreachPartition at PredictorEngineApp.java:153) finished in 0.940 s 18/04/17 17:02:01 INFO scheduler.DAGScheduler: Job 787 finished: foreachPartition at PredictorEngineApp.java:153, took 1.022864 s 18/04/17 17:02:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x56c27930 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x56c279300x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37899, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9578, negotiated timeout = 60000 18/04/17 17:02:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9578 18/04/17 17:02:01 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9578 closed 18/04/17 17:02:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.25 from job set of time 1523973720000 ms 18/04/17 17:02:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 574.0 (TID 574) in 482528 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:02:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 574.0, whose tasks have all completed, from pool 18/04/17 17:02:02 INFO scheduler.DAGScheduler: ResultStage 574 (foreachPartition at PredictorEngineApp.java:153) finished in 482.541 s 18/04/17 17:02:02 INFO scheduler.DAGScheduler: Job 575 finished: foreachPartition at PredictorEngineApp.java:153, took 482.638213 s 18/04/17 17:02:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x64908586 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x649085860x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59756, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9537, negotiated timeout = 60000 18/04/17 17:02:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9537 18/04/17 17:02:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9537 closed 18/04/17 17:02:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973240000 ms.25 from job set of time 1523973240000 ms 18/04/17 17:02:02 INFO scheduler.JobScheduler: Total delay: 482.752 s for time 1523973240000 ms (execution: 482.695 s) 18/04/17 17:02:02 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:02:02 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 17:02:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 771.0 (TID 771) in 2828 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:02:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 771.0, whose tasks have all completed, from pool 18/04/17 17:02:02 INFO scheduler.DAGScheduler: ResultStage 771 (foreachPartition at PredictorEngineApp.java:153) finished in 2.828 s 18/04/17 17:02:02 INFO scheduler.DAGScheduler: Job 771 finished: foreachPartition at PredictorEngineApp.java:153, took 2.852647 s 18/04/17 17:02:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ebd6795 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4ebd67950x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59760, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9538, negotiated timeout = 60000 18/04/17 17:02:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9538 18/04/17 17:02:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9538 closed 18/04/17 17:02:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.7 from job set of time 1523973720000 ms 18/04/17 17:02:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 779.0 (TID 779) in 3355 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:02:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 779.0, whose tasks have all completed, from pool 18/04/17 17:02:03 INFO scheduler.DAGScheduler: ResultStage 779 (foreachPartition at PredictorEngineApp.java:153) finished in 3.355 s 18/04/17 17:02:03 INFO scheduler.DAGScheduler: Job 779 finished: foreachPartition at PredictorEngineApp.java:153, took 3.417840 s 18/04/17 17:02:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66c7e966 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x66c7e9660x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59764, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9539, negotiated timeout = 60000 18/04/17 17:02:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9539 18/04/17 17:02:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9539 closed 18/04/17 17:02:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.12 from job set of time 1523973720000 ms 18/04/17 17:02:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 789.0 (TID 789) in 3557 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:02:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 789.0, whose tasks have all completed, from pool 18/04/17 17:02:03 INFO scheduler.DAGScheduler: ResultStage 789 (foreachPartition at PredictorEngineApp.java:153) finished in 3.558 s 18/04/17 17:02:03 INFO scheduler.DAGScheduler: Job 789 finished: foreachPartition at PredictorEngineApp.java:153, took 3.644937 s 18/04/17 17:02:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x120014e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x120014e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37916, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c957a, negotiated timeout = 60000 18/04/17 17:02:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c957a 18/04/17 17:02:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c957a closed 18/04/17 17:02:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.2 from job set of time 1523973720000 ms 18/04/17 17:02:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 788.0 (TID 788) in 4353 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:02:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 788.0, whose tasks have all completed, from pool 18/04/17 17:02:04 INFO scheduler.DAGScheduler: ResultStage 788 (foreachPartition at PredictorEngineApp.java:153) finished in 4.355 s 18/04/17 17:02:04 INFO scheduler.DAGScheduler: Job 788 finished: foreachPartition at PredictorEngineApp.java:153, took 4.439130 s 18/04/17 17:02:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6c4b00b1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6c4b00b10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42516, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 776.0 (TID 776) in 4398 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:02:04 INFO scheduler.DAGScheduler: ResultStage 776 (foreachPartition at PredictorEngineApp.java:153) finished in 4.399 s 18/04/17 17:02:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 776.0, whose tasks have all completed, from pool 18/04/17 17:02:04 INFO scheduler.DAGScheduler: Job 776 finished: foreachPartition at PredictorEngineApp.java:153, took 4.451729 s 18/04/17 17:02:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5a7e459e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5a7e459e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e6b, negotiated timeout = 60000 18/04/17 17:02:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37922, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e6b 18/04/17 17:02:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c957d, negotiated timeout = 60000 18/04/17 17:02:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e6b closed 18/04/17 17:02:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_789_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_789_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c957d 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_574_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.31 from job set of time 1523973720000 ms 18/04/17 17:02:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c957d closed 18/04/17 17:02:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_574_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:04 INFO spark.ContextCleaner: Cleaned accumulator 575 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_769_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:04 INFO 
storage.BlockManagerInfo: Removed broadcast_769_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:04 INFO spark.ContextCleaner: Cleaned accumulator 770 18/04/17 17:02:04 INFO spark.ContextCleaner: Cleaned accumulator 772 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_771_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_771_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_776_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.8 from job set of time 1523973720000 ms 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_776_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:04 INFO spark.ContextCleaner: Cleaned accumulator 777 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_779_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_779_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:04 INFO spark.ContextCleaner: Cleaned accumulator 780 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_782_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_782_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:04 INFO spark.ContextCleaner: Cleaned accumulator 783 18/04/17 17:02:04 INFO spark.ContextCleaner: Cleaned accumulator 789 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_787_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_787_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:04 INFO spark.ContextCleaner: Cleaned accumulator 788 18/04/17 17:02:04 INFO spark.ContextCleaner: Cleaned accumulator 790 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_788_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_788_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:04 INFO spark.ContextCleaner: Cleaned accumulator 791 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_790_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:04 INFO storage.BlockManagerInfo: Removed broadcast_790_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 786.0 (TID 786) in 6008 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:02:06 INFO scheduler.DAGScheduler: ResultStage 786 (foreachPartition at PredictorEngineApp.java:153) finished in 6.009 s 18/04/17 17:02:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 786.0, whose tasks have all completed, from pool 18/04/17 17:02:06 INFO scheduler.DAGScheduler: Job 786 finished: foreachPartition at PredictorEngineApp.java:153, took 6.089324 s 18/04/17 17:02:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6b2473b3 
connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6b2473b30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42525, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e6d, negotiated timeout = 60000 18/04/17 17:02:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e6d 18/04/17 17:02:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e6d closed 18/04/17 17:02:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.18 from job set of time 1523973720000 ms 18/04/17 17:02:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 791.0 (TID 791) in 6983 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:02:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 791.0, whose tasks have all completed, from pool 18/04/17 17:02:07 INFO scheduler.DAGScheduler: ResultStage 791 (foreachPartition at PredictorEngineApp.java:153) finished in 6.984 s 18/04/17 17:02:07 INFO scheduler.DAGScheduler: Job 791 finished: foreachPartition at PredictorEngineApp.java:153, took 7.074234 s 18/04/17 17:02:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1ffa8ef6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1ffa8ef60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37934, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c957f, negotiated timeout = 60000 18/04/17 17:02:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c957f 18/04/17 17:02:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c957f closed 18/04/17 17:02:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.19 from job set of time 1523973720000 ms 18/04/17 17:02:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 767.0 (TID 767) in 7556 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:02:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 767.0, whose tasks have all completed, from pool 18/04/17 17:02:07 INFO scheduler.DAGScheduler: ResultStage 767 (foreachPartition at PredictorEngineApp.java:153) finished in 7.557 s 18/04/17 17:02:07 INFO scheduler.DAGScheduler: Job 767 finished: foreachPartition at PredictorEngineApp.java:153, took 7.566755 s 18/04/17 17:02:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5a234e67 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5a234e670x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37937, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9580, negotiated timeout = 60000 18/04/17 17:02:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9580 18/04/17 17:02:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9580 closed 18/04/17 17:02:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.24 from job set of time 1523973720000 ms 18/04/17 17:02:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 775.0 (TID 775) in 8402 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:02:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 775.0, whose tasks have all completed, from pool 18/04/17 17:02:08 INFO scheduler.DAGScheduler: ResultStage 775 (foreachPartition at PredictorEngineApp.java:153) finished in 8.402 s 18/04/17 17:02:08 INFO scheduler.DAGScheduler: Job 775 finished: foreachPartition at PredictorEngineApp.java:153, took 8.452010 s 18/04/17 17:02:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b86eb5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7b86eb50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59794, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9540, negotiated timeout = 60000 18/04/17 17:02:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9540 18/04/17 17:02:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9540 closed 18/04/17 17:02:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:08 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.33 from job set of time 1523973720000 ms 18/04/17 17:02:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 768.0 (TID 768) in 10263 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:02:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 768.0, whose tasks have all completed, from pool 18/04/17 17:02:10 INFO scheduler.DAGScheduler: ResultStage 768 (foreachPartition at PredictorEngineApp.java:153) finished in 10.264 s 18/04/17 17:02:10 INFO scheduler.DAGScheduler: Job 768 finished: foreachPartition at PredictorEngineApp.java:153, took 10.277639 s 18/04/17 17:02:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x78abffea connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x78abffea0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42544, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e6f, negotiated timeout = 60000 18/04/17 17:02:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e6f 18/04/17 17:02:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e6f closed 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.28 from job set of time 1523973720000 ms 18/04/17 17:02:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 770.0 (TID 770) in 10295 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:02:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 770.0, whose tasks have all completed, from pool 18/04/17 17:02:10 INFO scheduler.DAGScheduler: ResultStage 770 (foreachPartition at PredictorEngineApp.java:153) finished in 10.295 s 18/04/17 17:02:10 INFO scheduler.DAGScheduler: Job 770 finished: foreachPartition at PredictorEngineApp.java:153, took 10.316024 s 18/04/17 17:02:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3a6cae28 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3a6cae280x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42547, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e70, negotiated timeout = 60000 18/04/17 17:02:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e70 18/04/17 17:02:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e70 closed 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.27 from job set of time 1523973720000 ms 18/04/17 17:02:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 784.0 (TID 784) in 10683 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:02:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 784.0, whose tasks have all completed, from pool 18/04/17 17:02:10 INFO scheduler.DAGScheduler: ResultStage 784 (foreachPartition at PredictorEngineApp.java:153) finished in 10.684 s 18/04/17 17:02:10 INFO scheduler.DAGScheduler: Job 784 finished: foreachPartition at PredictorEngineApp.java:153, took 10.759655 s 18/04/17 17:02:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x67b4bc2a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x67b4bc2a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37956, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 778.0 (TID 778) in 10704 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:02:10 INFO scheduler.DAGScheduler: ResultStage 778 (foreachPartition at PredictorEngineApp.java:153) finished in 10.705 s 18/04/17 17:02:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 778.0, whose tasks have all completed, from pool 18/04/17 17:02:10 INFO scheduler.DAGScheduler: Job 778 finished: foreachPartition at PredictorEngineApp.java:153, took 10.763794 s 18/04/17 17:02:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f62a7f1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f62a7f10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59808, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9583, negotiated timeout = 60000 18/04/17 17:02:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9583 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9545, negotiated timeout = 60000 18/04/17 17:02:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9583 closed 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9545 18/04/17 17:02:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9545 closed 18/04/17 17:02:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.23 from job set of time 1523973720000 ms 18/04/17 17:02:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.29 from job set of time 1523973720000 ms 18/04/17 17:02:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 772.0 (TID 772) in 10933 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:02:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 772.0, whose tasks have all completed, from pool 18/04/17 17:02:11 INFO scheduler.DAGScheduler: ResultStage 772 (foreachPartition at PredictorEngineApp.java:153) finished in 10.933 s 18/04/17 17:02:11 INFO scheduler.DAGScheduler: Job 773 finished: foreachPartition at PredictorEngineApp.java:153, took 10.974054 s 18/04/17 17:02:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x410d9a10 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x410d9a100x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42557, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e72, negotiated timeout = 60000 18/04/17 17:02:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e72 18/04/17 17:02:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e72 closed 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.9 from job set of time 1523973720000 ms 18/04/17 17:02:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 781.0 (TID 781) in 11303 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:02:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 781.0, whose tasks have all completed, from pool 18/04/17 17:02:11 INFO scheduler.DAGScheduler: ResultStage 781 (foreachPartition at PredictorEngineApp.java:153) finished in 11.304 s 18/04/17 17:02:11 INFO scheduler.DAGScheduler: Job 781 finished: foreachPartition at PredictorEngineApp.java:153, took 11.371405 s 18/04/17 17:02:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2f81b8bf connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2f81b8bf0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59817, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9546, negotiated timeout = 60000 18/04/17 17:02:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9546 18/04/17 17:02:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9546 closed 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.15 from job set of time 1523973720000 ms 18/04/17 17:02:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 773.0 (TID 773) in 11638 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:02:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 773.0, whose tasks have all completed, from pool 18/04/17 17:02:11 INFO scheduler.DAGScheduler: ResultStage 773 (foreachPartition at PredictorEngineApp.java:153) finished in 11.638 s 18/04/17 17:02:11 INFO scheduler.DAGScheduler: Job 772 finished: foreachPartition at PredictorEngineApp.java:153, took 11.681746 s 18/04/17 17:02:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ac07cb0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4ac07cb00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42564, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e74, negotiated timeout = 60000 18/04/17 17:02:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e74 18/04/17 17:02:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e74 closed 18/04/17 17:02:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.6 from job set of time 1523973720000 ms 18/04/17 17:02:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 774.0 (TID 774) in 12159 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:02:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 774.0, whose tasks have all completed, from pool 18/04/17 17:02:12 INFO scheduler.DAGScheduler: ResultStage 774 (foreachPartition at PredictorEngineApp.java:153) finished in 12.159 s 18/04/17 17:02:12 INFO scheduler.DAGScheduler: Job 774 finished: foreachPartition at PredictorEngineApp.java:153, took 12.205719 s 18/04/17 17:02:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f292307 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f2923070x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37973, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9586, negotiated timeout = 60000 18/04/17 17:02:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9586 18/04/17 17:02:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9586 closed 18/04/17 17:02:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.20 from job set of time 1523973720000 ms 18/04/17 17:02:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 785.0 (TID 785) in 12969 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:02:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 785.0, whose tasks have all completed, from pool 18/04/17 17:02:13 INFO scheduler.DAGScheduler: ResultStage 785 (foreachPartition at PredictorEngineApp.java:153) finished in 12.970 s 18/04/17 17:02:13 INFO scheduler.DAGScheduler: Job 785 finished: foreachPartition at PredictorEngineApp.java:153, took 13.048481 s 18/04/17 17:02:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5ebff52f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5ebff52f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37976, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9587, negotiated timeout = 60000 18/04/17 17:02:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9587 18/04/17 17:02:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9587 closed 18/04/17 17:02:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.11 from job set of time 1523973720000 ms 18/04/17 17:02:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 783.0 (TID 783) in 17833 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:02:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 783.0, whose tasks have all completed, from pool 18/04/17 17:02:17 INFO scheduler.DAGScheduler: ResultStage 783 (foreachPartition at PredictorEngineApp.java:153) finished in 17.835 s 18/04/17 17:02:17 INFO scheduler.DAGScheduler: Job 783 finished: foreachPartition at PredictorEngineApp.java:153, took 17.907866 s 18/04/17 17:02:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x14ab3c12 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x14ab3c120x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42581, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e78, negotiated timeout = 60000 18/04/17 17:02:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e78 18/04/17 17:02:17 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e78 closed 18/04/17 17:02:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:18 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.5 from job set of time 1523973720000 ms 18/04/17 17:02:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 777.0 (TID 777) in 18091 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:02:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 777.0, whose tasks have all completed, from pool 18/04/17 17:02:18 INFO scheduler.DAGScheduler: ResultStage 777 (foreachPartition at PredictorEngineApp.java:153) finished in 18.092 s 18/04/17 17:02:18 INFO scheduler.DAGScheduler: Job 777 finished: foreachPartition at PredictorEngineApp.java:153, took 18.147379 s 18/04/17 17:02:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x502d4c39 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x502d4c390x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:59842, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a954b, negotiated timeout = 60000 18/04/17 17:02:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a954b 18/04/17 17:02:18 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a954b closed 18/04/17 17:02:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:18 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.22 from job set of time 1523973720000 ms 18/04/17 17:02:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 766.0 (TID 766) in 19526 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:02:19 INFO scheduler.DAGScheduler: ResultStage 766 (foreachPartition at PredictorEngineApp.java:153) finished in 19.527 s 18/04/17 17:02:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 766.0, whose tasks have all completed, from pool 18/04/17 17:02:19 INFO scheduler.DAGScheduler: Job 766 finished: foreachPartition at PredictorEngineApp.java:153, took 19.533114 s 18/04/17 17:02:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xbd1deb4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xbd1deb40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42591, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e79, negotiated timeout = 60000 18/04/17 17:02:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e79 18/04/17 17:02:19 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e79 closed 18/04/17 17:02:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:19 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.1 from job set of time 1523973720000 ms 18/04/17 17:02:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 780.0 (TID 780) in 21054 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:02:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 780.0, whose tasks have all completed, from pool 18/04/17 17:02:21 INFO scheduler.DAGScheduler: ResultStage 780 (foreachPartition at PredictorEngineApp.java:153) finished in 21.055 s 18/04/17 17:02:21 INFO scheduler.DAGScheduler: Job 780 finished: foreachPartition at PredictorEngineApp.java:153, took 21.118696 s 18/04/17 17:02:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5b1faf6f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:02:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5b1faf6f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:02:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:02:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42597, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:02:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e7c, negotiated timeout = 60000 18/04/17 17:02:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e7c 18/04/17 17:02:21 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e7c closed 18/04/17 17:02:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:02:21 INFO scheduler.JobScheduler: Finished job streaming job 1523973720000 ms.10 from job set of time 1523973720000 ms 18/04/17 17:02:21 INFO scheduler.JobScheduler: Total delay: 21.212 s for time 1523973720000 ms (execution: 21.158 s) 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 972 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 972 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1008 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1008 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 972 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 972 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1008 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1008 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 973 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 973 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1009 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1009 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 973 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 973 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1009 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1009 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 974 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 974 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1010 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1010 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 974 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 974 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1010 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1010 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 975 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 975 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1011 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1011 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 975 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 975 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1011 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1011 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 976 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 976 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1012 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1012 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 976 from persistence 
list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 976 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1012 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1012 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 977 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 977 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1013 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1013 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 977 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 977 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1013 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1013 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 978 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 978 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1014 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1014 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 978 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 978 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1014 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1014 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 979 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 979 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1015 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1015 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 979 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 979 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1015 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1015 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 980 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 980 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1016 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1016 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 980 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 980 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1016 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1016 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 981 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 981 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1017 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1017 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 981 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 981 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1017 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1017 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 982 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 982 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1018 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1018 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 982 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 982 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1018 from persistence list 18/04/17 17:02:21 INFO 
storage.BlockManager: Removing RDD 1018 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 983 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 983 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1019 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1019 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 983 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 983 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1019 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1019 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 984 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 984 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1020 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1020 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 984 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 984 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1020 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1020 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 985 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 985 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1021 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1021 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 985 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 985 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1021 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1021 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 986 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 986 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1022 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1022 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 986 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 986 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1022 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1022 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 987 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 987 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1023 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1023 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 987 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 987 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1023 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1023 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 988 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 988 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1024 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1024 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 988 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 988 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1024 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1024 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 989 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: 
Removing RDD 989 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1025 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1025 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 989 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 989 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1025 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1025 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 990 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 990 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1026 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1026 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 990 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 990 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1026 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1026 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 991 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 991 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1027 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1027 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 991 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 991 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1027 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1027 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 992 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 992 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1028 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1028 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 992 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 992 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1028 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1028 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 993 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 993 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1029 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1029 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 993 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 993 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1029 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1029 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 994 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 994 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1030 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1030 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 994 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 994 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1030 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1030 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 995 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 995 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1031 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1031 18/04/17 
17:02:21 INFO kafka.KafkaRDD: Removing RDD 995 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 995 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1031 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1031 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 996 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 996 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1032 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1032 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 996 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 996 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1032 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1032 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 997 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 997 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1033 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1033 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 997 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 997 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1033 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1033 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 998 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 998 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1034 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1034 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 998 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 998 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1034 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1034 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 999 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 999 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1035 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1035 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 999 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 999 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1035 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1035 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1000 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1000 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1036 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1036 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1000 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1000 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1036 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1036 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1001 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1001 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1037 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1037 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1001 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1001 18/04/17 17:02:21 INFO 
kafka.KafkaRDD: Removing RDD 1037 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1037 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1002 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1002 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1038 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1038 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1002 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1002 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1038 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1038 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1003 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1003 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1039 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1039 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1003 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1003 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1039 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1039 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1004 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1004 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1040 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1040 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1004 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1004 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1040 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1040 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1005 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1005 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1041 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1041 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1005 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1005 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1041 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1041 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1006 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1006 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1042 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1042 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1006 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1006 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1042 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1042 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1007 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1007 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1043 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1043 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1007 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1007 18/04/17 17:02:21 INFO kafka.KafkaRDD: Removing RDD 1043 from persistence list 18/04/17 17:02:21 INFO storage.BlockManager: Removing RDD 1043 18/04/17 17:02:21 INFO 
scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:02:21 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973600000 ms 1523973540000 ms 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 126 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_58_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_58_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_766_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_766_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 767 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_767_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_767_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 768 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_768_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_768_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 769 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_770_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_770_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 771 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_791_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_791_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 792 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 773 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 774 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_772_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_772_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 775 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_773_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_773_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 776 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_774_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_774_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned 
accumulator 778 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_775_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_775_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 779 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_777_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.2 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_777_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 781 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_778_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_778_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 782 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_780_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_780_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 784 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_781_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_781_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 785 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_783_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_783_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_784_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_784_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_785_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_785_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 786 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_786_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_786_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 787 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_12_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_12_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 13 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed 
broadcast_11_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_11_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 12 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_10_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_10_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 11 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_9_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_9_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 10 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_8_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_8_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 9 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_7_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_7_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 8 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_6_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_6_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 7 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_5_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_5_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 6 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_4_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_4_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 5 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_3_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_3_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 4 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 3 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 2 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_0_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_0_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, 
free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 1 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_478_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_478_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 479 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_369_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_369_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 370 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 186 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_184_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_184_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 185 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 181 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_179_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_179_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 180 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_178_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_178_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 179 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_177_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_177_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 178 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_176_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_176_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 177 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_175_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_175_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 176 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_174_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_174_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 175 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_173_piece0 on ***IP 
masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_173_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 174 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_172_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_172_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 173 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_171_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_171_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 172 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_170_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_170_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 171 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_169_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.3 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_169_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 170 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_168_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_168_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 169 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_167_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_167_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 168 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_166_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_166_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 167 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_165_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_165_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 166 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_164_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_164_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 165 18/04/17 17:02:23 INFO 
storage.BlockManagerInfo: Removed broadcast_163_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_163_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 164 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_162_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_162_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 163 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_161_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_161_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 162 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_160_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_160_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 161 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_159_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_159_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 160 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_158_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_158_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 159 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_157_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_157_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 68 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_66_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_66_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 67 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_65_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_65_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 66 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_64_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_64_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: 
Cleaned accumulator 65 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_63_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_63_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 64 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_62_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_62_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 63 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_61_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_61_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 62 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_60_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_60_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 61 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_59_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_59_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 60 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 59 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_57_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_57_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 58 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_56_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_56_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 57 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_55_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_55_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 56 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_54_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_54_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 55 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_53_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_53_piece0 on ***hostname 
masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 54 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_52_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_52_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 53 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_51_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_51_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 52 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 23 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_21_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_21_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 22 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 20 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_18_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_18_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 19 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_17_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_17_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 18 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_16_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_16_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 17 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 15 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_13_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_13_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 14 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_86_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_86_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 87 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_85_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_85_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 86 
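
The ContextCleaner and BlockManagerInfo entries above, together with the per-batch job submissions logged below at 17:03:00, are the normal footprint of a Spark 1.6 Streaming driver: every foreachPartition job broadcasts a ~3.1 KB serialized task (the "broadcast at DAGScheduler.scala:1006" entries), and the cleaner later removes those pieces from the driver and the executors. What follows is a minimal, hypothetical sketch of the kind of application that produces this pattern, assuming only what the log itself references (createDirectStream at PredictorEngineApp.java:125, foreachPartition at PredictorEngineApp.java:153, one-minute batches); the broker list and topic name are placeholders and the real PredictorEngineApp source is not part of this log.

    import java.util.*;

    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.*;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    import scala.Tuple2;

    // Hypothetical reconstruction -- the real PredictorEngineApp source is not in this log.
    public class PredictorEngineAppSketch {
      public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine");
        // 60 s batches match the once-per-minute "Added jobs for time ..." entries in the log.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092");      // placeholder broker list

        Set<String> topics = new HashSet<>(Arrays.asList("events"));  // placeholder topic name

        // Roughly what the log attributes to PredictorEngineApp.java:125.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
            jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
            kafkaParams, topics);

        // Roughly what the log attributes to PredictorEngineApp.java:153. Each batch turns
        // every such action into a small job; the DAGScheduler broadcasts the ~3 KB serialized
        // task (broadcast_N_piece0), and ContextCleaner later removes it -- the churn visible
        // throughout the surrounding entries.
        stream.foreachRDD(rdd ->
            rdd.foreachPartition(records -> {
              while (records.hasNext()) {
                Tuple2<String, String> record = records.next();
                // score / predict on the record here
              }
            }));

        jssc.start();
        jssc.awaitTermination();
      }
    }
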
18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_84_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_84_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 85 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_83_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_83_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 84 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_82_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_82_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 83 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_81_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_81_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 82 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_80_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_80_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 81 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_79_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_79_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 80 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_78_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_78_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 79 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_77_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_77_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 78 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_76_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_76_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 77 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_75_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_75_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO 
spark.ContextCleaner: Cleaned accumulator 76 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_74_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_74_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 75 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_73_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_73_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 74 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_72_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_72_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 73 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_71_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_71_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 72 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_70_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_70_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 71 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_69_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_69_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 70 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_68_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_68_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 69 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_67_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_67_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_96_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_96_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 97 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_95_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_95_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 
96 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_94_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_94_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 95 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_93_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_93_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 94 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_92_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_92_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 93 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_91_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_91_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 92 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_90_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_90_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 91 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_89_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_89_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 90 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_88_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_88_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 89 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_87_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_87_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 88 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_122_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_122_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 123 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_121_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_121_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO 
spark.ContextCleaner: Cleaned accumulator 122 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_120_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_120_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 121 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_119_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_119_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 120 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_118_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_118_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 119 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_117_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_117_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 118 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_116_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_116_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 117 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_115_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_115_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 116 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_114_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_114_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 115 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_113_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_113_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 114 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_112_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_112_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 113 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_111_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_111_piece0 on ***hostname masked***:50260 in 
memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 112 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_110_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_110_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 111 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_109_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_109_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 110 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_108_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_108_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 109 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_107_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_107_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 108 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_106_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_106_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 107 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_105_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_105_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 106 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 105 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 104 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 103 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 102 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 101 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_99_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_99_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 100 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_98_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_98_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 99 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_97_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_97_piece0 
on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 98 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 158 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_154_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_154_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 155 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_152_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_152_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 153 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_151_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_151_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 152 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_150_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_150_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 151 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_148_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_148_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 149 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_146_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_146_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 147 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_145_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_145_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 146 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_143_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_143_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 144 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_141_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_141_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 142 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_140_piece0 on ***IP 
masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_140_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 141 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 139 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_137_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_137_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 138 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_136_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_136_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 137 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_135_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_135_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 136 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_134_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_134_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 135 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_133_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_133_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 134 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_132_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_132_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 133 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_131_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_131_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 132 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_129_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_129_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 130 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 128 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_126_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_126_piece0 on ***hostname masked***:57847 
in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 127 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_125_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_125_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_124_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_124_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 125 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_123_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:02:23 INFO storage.BlockManagerInfo: Removed broadcast_123_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:02:23 INFO spark.ContextCleaner: Cleaned accumulator 124 18/04/17 17:03:00 INFO scheduler.JobScheduler: Added jobs for time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.2 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.0 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.1 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.3 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.4 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.5 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.4 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.3 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.0 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.6 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.8 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.9 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.7 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.10 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.11 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.12 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.13 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.14 from job set of time 
1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.13 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.16 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.15 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.14 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.18 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.16 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.17 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.19 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.20 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.17 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.21 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.21 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.22 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.23 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.26 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.25 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.27 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.24 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.28 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.29 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.30 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.31 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.32 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.33 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.30 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.34 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973780000 ms.35 from job 
set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.35 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 792 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 792 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 792 (KafkaRDD[1107] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_792 stored as values 
in memory (estimated size 5.7 KB, free 491.7 MB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_792_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.7 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_792_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 792 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 792 (KafkaRDD[1107] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 792.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 793 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 793 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 793 (KafkaRDD[1081] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_793 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 792.0 (TID 792, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_793_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_793_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 793 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 793 (KafkaRDD[1081] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 793.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 795 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 794 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 794 (KafkaRDD[1089] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 793.0 (TID 793, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_794 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_794_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_794_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 794 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 
794 (KafkaRDD[1089] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 794.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 794 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 795 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 795 (KafkaRDD[1100] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 794.0 (TID 794, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_795 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_795_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_795_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 795 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 795 (KafkaRDD[1100] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 795.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 796 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 796 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 796 (KafkaRDD[1106] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 795.0 (TID 795, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_796 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_796_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_796_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 796 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 796 (KafkaRDD[1106] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 796.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 797 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 797 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO 
scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 797 (KafkaRDD[1104] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 796.0 (TID 796, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_797 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_792_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_793_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_797_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_797_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 797 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 797 (KafkaRDD[1104] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 797.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 798 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 798 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 798 (KafkaRDD[1092] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 797.0 (TID 797, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_798 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_798_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_798_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 798 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 798 (KafkaRDD[1092] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 798.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 800 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 799 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 799 (KafkaRDD[1087] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 
17:03:00 INFO storage.MemoryStore: Block broadcast_799 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 798.0 (TID 798, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_794_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_799_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_799_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 799 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 799 (KafkaRDD[1087] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 799.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 801 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 800 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 800 (KafkaRDD[1099] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_800 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 799.0 (TID 799, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_796_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_800_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_800_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 800 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 800 (KafkaRDD[1099] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 800.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 799 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 801 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 801 (KafkaRDD[1090] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 800.0 (TID 800, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_801 stored as 
values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_795_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_801_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_801_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 801 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 801 (KafkaRDD[1090] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 801.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 802 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 802 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 802 (KafkaRDD[1091] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_802 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 801.0 (TID 801, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_798_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_802_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_802_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 802 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 802 (KafkaRDD[1091] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 802.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 803 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 803 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 803 (KafkaRDD[1103] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 802.0 (TID 802, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_803 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_799_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO 
storage.BlockManagerInfo: Added broadcast_797_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_803_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_803_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 803 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 803 (KafkaRDD[1103] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 803.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 804 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 804 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 804 (KafkaRDD[1082] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_804 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 803.0 (TID 803, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_804_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_804_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 804 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 804 (KafkaRDD[1082] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 804.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 805 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 805 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 805 (KafkaRDD[1112] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_805 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 804.0 (TID 804, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_805_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_805_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 805 from broadcast at DAGScheduler.scala:1006 18/04/17 
17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 805 (KafkaRDD[1112] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 805.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 807 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 806 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 806 (KafkaRDD[1085] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_806 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 805.0 (TID 805, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_800_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_806_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_806_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 806 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 806 (KafkaRDD[1085] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 806.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 806 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 807 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 807 (KafkaRDD[1108] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_807 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_802_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 806.0 (TID 806, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_803_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_807_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_807_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 807 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 807 
(KafkaRDD[1108] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 807.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 808 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 808 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 808 (KafkaRDD[1113] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_808 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 807.0 (TID 807, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_806_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_808_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_808_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 808 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 808 (KafkaRDD[1113] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 808.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 809 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 809 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 809 (KafkaRDD[1088] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_809 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 808.0 (TID 808, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_805_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_809_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_809_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 809 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 809 (KafkaRDD[1088] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 809.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 811 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 810 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 810 (KafkaRDD[1095] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_810 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 809.0 (TID 809, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_810_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_810_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 810 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 810 (KafkaRDD[1095] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 810.0 with 1 tasks 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_807_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 810 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 811 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 811 (KafkaRDD[1111] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_811 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_808_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_801_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 810.0 (TID 810, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_811_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_811_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 811 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 811 (KafkaRDD[1111] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 811.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 812 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 
17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 812 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 812 (KafkaRDD[1105] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_812 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 811.0 (TID 811, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_809_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_804_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_812_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_812_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 812 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 812 (KafkaRDD[1105] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 812.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 813 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 813 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 813 (KafkaRDD[1098] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_813 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 812.0 (TID 812, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_810_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_813_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_813_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 813 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 813 (KafkaRDD[1098] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 813.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 814 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 814 (foreachPartition 
at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 814 (KafkaRDD[1102] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_814 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 813.0 (TID 813, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_814_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_814_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 814 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 814 (KafkaRDD[1102] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 814.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 815 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 815 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 815 (KafkaRDD[1109] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_815 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 814.0 (TID 814, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_815_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_815_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 815 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 815 (KafkaRDD[1109] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 815.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 816 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 816 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 816 (KafkaRDD[1086] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_816 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:03:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 815.0 (TID 815, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_816_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_816_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 816 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 816 (KafkaRDD[1086] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 816.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Got job 817 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 817 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting ResultStage 817 (KafkaRDD[1114] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_817 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 816.0 (TID 816, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_814_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.MemoryStore: Block broadcast_817_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_817_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:00 INFO spark.SparkContext: Created broadcast 817 from broadcast at DAGScheduler.scala:1006 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 817 (KafkaRDD[1114] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Adding task set 817.0 with 1 tasks 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 817.0 (TID 817, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_815_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_816_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_817_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_812_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_813_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO storage.BlockManagerInfo: Added broadcast_811_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Finished task 0.0 in 
stage 801.0 (TID 801) in 184 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:03:00 INFO scheduler.DAGScheduler: ResultStage 801 (foreachPartition at PredictorEngineApp.java:153) finished in 0.184 s 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 801.0, whose tasks have all completed, from pool 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Job 799 finished: foreachPartition at PredictorEngineApp.java:153, took 0.221983 s 18/04/17 17:03:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x15e46323 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x15e463230x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42750, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e8b, negotiated timeout = 60000 18/04/17 17:03:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e8b 18/04/17 17:03:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e8b closed 18/04/17 17:03:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.10 from job set of time 1523973780000 ms 18/04/17 17:03:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 812.0 (TID 812) in 802 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:03:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 812.0, whose tasks have all completed, from pool 18/04/17 17:03:00 INFO scheduler.DAGScheduler: ResultStage 812 (foreachPartition at PredictorEngineApp.java:153) finished in 0.803 s 18/04/17 17:03:00 INFO scheduler.DAGScheduler: Job 812 finished: foreachPartition at PredictorEngineApp.java:153, took 0.881641 s 18/04/17 17:03:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x22969788 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x229697880x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60009, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9558, negotiated timeout = 60000 18/04/17 17:03:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9558 18/04/17 17:03:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9558 closed 18/04/17 17:03:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.25 from job set of time 1523973780000 ms 18/04/17 17:03:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 799.0 (TID 799) in 1458 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:03:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 799.0, whose tasks have all completed, from pool 18/04/17 17:03:01 INFO scheduler.DAGScheduler: ResultStage 799 (foreachPartition at PredictorEngineApp.java:153) finished in 1.459 s 18/04/17 17:03:01 INFO scheduler.DAGScheduler: Job 800 finished: foreachPartition at PredictorEngineApp.java:153, took 1.489257 s 18/04/17 17:03:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x747d7152 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x747d71520x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60013, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9559, negotiated timeout = 60000 18/04/17 17:03:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9559 18/04/17 17:03:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9559 closed 18/04/17 17:03:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.7 from job set of time 1523973780000 ms 18/04/17 17:03:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 809.0 (TID 809) in 3699 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:03:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 809.0, whose tasks have all completed, from pool 18/04/17 17:03:03 INFO scheduler.DAGScheduler: ResultStage 809 (foreachPartition at PredictorEngineApp.java:153) finished in 3.700 s 18/04/17 17:03:03 INFO scheduler.DAGScheduler: Job 809 finished: foreachPartition at PredictorEngineApp.java:153, took 3.770181 s 18/04/17 17:03:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x286a1e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x286a1e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38168, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9597, negotiated timeout = 60000 18/04/17 17:03:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9597 18/04/17 17:03:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9597 closed 18/04/17 17:03:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.8 from job set of time 1523973780000 ms 18/04/17 17:03:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 794.0 (TID 794) in 6334 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:03:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 794.0, whose tasks have all completed, from pool 18/04/17 17:03:06 INFO scheduler.DAGScheduler: ResultStage 794 (foreachPartition at PredictorEngineApp.java:153) finished in 6.334 s 18/04/17 17:03:06 INFO scheduler.DAGScheduler: Job 795 finished: foreachPartition at PredictorEngineApp.java:153, took 6.349820 s 18/04/17 17:03:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4c1c38a4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4c1c38a40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60027, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a955c, negotiated timeout = 60000 18/04/17 17:03:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a955c 18/04/17 17:03:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a955c closed 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.9 from job set of time 1523973780000 ms 18/04/17 17:03:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 808.0 (TID 808) in 6364 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:03:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 808.0, whose tasks have all completed, from pool 18/04/17 17:03:06 INFO scheduler.DAGScheduler: ResultStage 808 (foreachPartition at PredictorEngineApp.java:153) finished in 6.365 s 18/04/17 17:03:06 INFO scheduler.DAGScheduler: Job 808 finished: foreachPartition at PredictorEngineApp.java:153, took 6.432234 s 18/04/17 17:03:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f799609 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f7996090x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38179, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c959e, negotiated timeout = 60000 18/04/17 17:03:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c959e 18/04/17 17:03:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c959e closed 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.33 from job set of time 1523973780000 ms 18/04/17 17:03:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 811.0 (TID 811) in 6531 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:03:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 811.0, whose tasks have all completed, from pool 18/04/17 17:03:06 INFO scheduler.DAGScheduler: ResultStage 811 (foreachPartition at PredictorEngineApp.java:153) finished in 6.531 s 18/04/17 17:03:06 INFO scheduler.DAGScheduler: Job 810 finished: foreachPartition at PredictorEngineApp.java:153, took 6.607505 s 18/04/17 17:03:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7dbebb0a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7dbebb0a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42777, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e94, negotiated timeout = 60000 18/04/17 17:03:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e94 18/04/17 17:03:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e94 closed 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.31 from job set of time 1523973780000 ms 18/04/17 17:03:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 816.0 (TID 816) in 6648 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:03:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 816.0, whose tasks have all completed, from pool 18/04/17 17:03:06 INFO scheduler.DAGScheduler: ResultStage 816 (foreachPartition at PredictorEngineApp.java:153) finished in 6.648 s 18/04/17 17:03:06 INFO scheduler.DAGScheduler: Job 816 finished: foreachPartition at PredictorEngineApp.java:153, took 6.736716 s 18/04/17 17:03:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6bcecf4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6bcecf40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38185, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c959f, negotiated timeout = 60000 18/04/17 17:03:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c959f 18/04/17 17:03:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c959f closed 18/04/17 17:03:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.6 from job set of time 1523973780000 ms 18/04/17 17:03:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 798.0 (TID 798) in 6942 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:03:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 798.0, whose tasks have all completed, from pool 18/04/17 17:03:07 INFO scheduler.DAGScheduler: ResultStage 798 (foreachPartition at PredictorEngineApp.java:153) finished in 6.942 s 18/04/17 17:03:07 INFO scheduler.DAGScheduler: Job 798 finished: foreachPartition at PredictorEngineApp.java:153, took 6.970123 s 18/04/17 17:03:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2bb4350e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2bb4350e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60040, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a955e, negotiated timeout = 60000 18/04/17 17:03:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a955e 18/04/17 17:03:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a955e closed 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.12 from job set of time 1523973780000 ms 18/04/17 17:03:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 803.0 (TID 803) in 7020 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:03:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 803.0, whose tasks have all completed, from pool 18/04/17 17:03:07 INFO scheduler.DAGScheduler: ResultStage 803 (foreachPartition at PredictorEngineApp.java:153) finished in 7.020 s 18/04/17 17:03:07 INFO scheduler.DAGScheduler: Job 803 finished: foreachPartition at PredictorEngineApp.java:153, took 7.072433 s 18/04/17 17:03:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1370e957 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1370e9570x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60043, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9560, negotiated timeout = 60000 18/04/17 17:03:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9560 18/04/17 17:03:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9560 closed 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.23 from job set of time 1523973780000 ms 18/04/17 17:03:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 800.0 (TID 800) in 7484 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:03:07 INFO scheduler.DAGScheduler: ResultStage 800 (foreachPartition at PredictorEngineApp.java:153) finished in 7.485 s 18/04/17 17:03:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 800.0, whose tasks have all completed, from pool 18/04/17 17:03:07 INFO scheduler.DAGScheduler: Job 801 finished: foreachPartition at PredictorEngineApp.java:153, took 7.518027 s 18/04/17 17:03:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3d88c629 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3d88c6290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42790, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e95, negotiated timeout = 60000 18/04/17 17:03:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e95 18/04/17 17:03:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e95 closed 18/04/17 17:03:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.19 from job set of time 1523973780000 ms 18/04/17 17:03:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 802.0 (TID 802) in 8913 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:03:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 802.0, whose tasks have all completed, from pool 18/04/17 17:03:09 INFO scheduler.DAGScheduler: ResultStage 802 (foreachPartition at PredictorEngineApp.java:153) finished in 8.914 s 18/04/17 17:03:09 INFO scheduler.DAGScheduler: Job 802 finished: foreachPartition at PredictorEngineApp.java:153, took 8.962378 s 18/04/17 17:03:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x152332bd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x152332bd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60051, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9561, negotiated timeout = 60000 18/04/17 17:03:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9561 18/04/17 17:03:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9561 closed 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.11 from job set of time 1523973780000 ms 18/04/17 17:03:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 795.0 (TID 795) in 9345 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:03:09 INFO scheduler.DAGScheduler: ResultStage 795 (foreachPartition at PredictorEngineApp.java:153) finished in 9.345 s 18/04/17 17:03:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 795.0, whose tasks have all completed, from pool 18/04/17 17:03:09 INFO scheduler.DAGScheduler: Job 794 finished: foreachPartition at PredictorEngineApp.java:153, took 9.363550 s 18/04/17 17:03:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x41c5b4b0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x41c5b4b00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38203, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95a3, negotiated timeout = 60000 18/04/17 17:03:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95a3 18/04/17 17:03:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95a3 closed 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.20 from job set of time 1523973780000 ms 18/04/17 17:03:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 797.0 (TID 797) in 9489 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:03:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 797.0, whose tasks have all completed, from pool 18/04/17 17:03:09 INFO scheduler.DAGScheduler: ResultStage 797 (foreachPartition at PredictorEngineApp.java:153) finished in 9.489 s 18/04/17 17:03:09 INFO scheduler.DAGScheduler: Job 797 finished: foreachPartition at PredictorEngineApp.java:153, took 9.514176 s 18/04/17 17:03:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2e39bde5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2e39bde50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60057, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9562, negotiated timeout = 60000 18/04/17 17:03:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9562 18/04/17 17:03:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9562 closed 18/04/17 17:03:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.24 from job set of time 1523973780000 ms 18/04/17 17:03:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 807.0 (TID 807) in 10117 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:03:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 807.0, whose tasks have all completed, from pool 18/04/17 17:03:10 INFO scheduler.DAGScheduler: ResultStage 807 (foreachPartition at PredictorEngineApp.java:153) finished in 10.118 s 18/04/17 17:03:10 INFO scheduler.DAGScheduler: Job 806 finished: foreachPartition at PredictorEngineApp.java:153, took 10.182271 s 18/04/17 17:03:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xc5f1ab9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xc5f1ab90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42806, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e96, negotiated timeout = 60000 18/04/17 17:03:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e96 18/04/17 17:03:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e96 closed 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.28 from job set of time 1523973780000 ms 18/04/17 17:03:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 815.0 (TID 815) in 10558 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:03:10 INFO scheduler.DAGScheduler: ResultStage 815 (foreachPartition at PredictorEngineApp.java:153) finished in 10.559 s 18/04/17 17:03:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 815.0, whose tasks have all completed, from pool 18/04/17 17:03:10 INFO scheduler.DAGScheduler: Job 815 finished: foreachPartition at PredictorEngineApp.java:153, took 10.644492 s 18/04/17 17:03:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x754772c3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x754772c30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38214, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95a7, negotiated timeout = 60000 18/04/17 17:03:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 792.0 (TID 792) in 10653 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:03:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 792.0, whose tasks have all completed, from pool 18/04/17 17:03:10 INFO scheduler.DAGScheduler: ResultStage 792 (foreachPartition at PredictorEngineApp.java:153) finished in 10.653 s 18/04/17 17:03:10 INFO scheduler.DAGScheduler: Job 792 finished: foreachPartition at PredictorEngineApp.java:153, took 10.662784 s 18/04/17 17:03:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95a7 18/04/17 17:03:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95a7 closed 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.29 from job set of time 1523973780000 ms 18/04/17 17:03:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 804.0 (TID 804) in 10627 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:03:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 804.0, whose tasks have all completed, from pool 18/04/17 17:03:10 INFO scheduler.DAGScheduler: ResultStage 804 (foreachPartition at PredictorEngineApp.java:153) finished in 10.627 s 18/04/17 17:03:10 INFO scheduler.DAGScheduler: Job 804 finished: foreachPartition at PredictorEngineApp.java:153, took 10.682254 s 18/04/17 17:03:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x45b59b63 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x45b59b630x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.27 from job set of time 1523973780000 ms 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42812, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28e97, negotiated timeout = 60000 18/04/17 17:03:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28e97 18/04/17 17:03:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28e97 closed 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.2 from job set of time 1523973780000 ms 18/04/17 17:03:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 805.0 (TID 805) in 10723 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:03:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 805.0, whose tasks have all completed, from pool 18/04/17 17:03:10 INFO scheduler.DAGScheduler: ResultStage 805 (foreachPartition at PredictorEngineApp.java:153) finished in 10.723 s 18/04/17 17:03:10 INFO scheduler.DAGScheduler: Job 805 finished: foreachPartition at PredictorEngineApp.java:153, took 10.780924 s 18/04/17 17:03:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3eca1cb2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3eca1cb20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38220, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95a8, negotiated timeout = 60000 18/04/17 17:03:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95a8 18/04/17 17:03:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95a8 closed 18/04/17 17:03:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.32 from job set of time 1523973780000 ms 18/04/17 17:03:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 810.0 (TID 810) in 11001 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:03:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 810.0, whose tasks have all completed, from pool 18/04/17 17:03:11 INFO scheduler.DAGScheduler: ResultStage 810 (foreachPartition at PredictorEngineApp.java:153) finished in 11.001 s 18/04/17 17:03:11 INFO scheduler.DAGScheduler: Job 811 finished: foreachPartition at PredictorEngineApp.java:153, took 11.074954 s 18/04/17 17:03:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6781333a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6781333a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38229, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95ab, negotiated timeout = 60000 18/04/17 17:03:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95ab 18/04/17 17:03:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95ab closed 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.15 from job set of time 1523973780000 ms 18/04/17 17:03:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 813.0 (TID 813) in 11067 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:03:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 813.0, whose tasks have all completed, from pool 18/04/17 17:03:11 INFO scheduler.DAGScheduler: ResultStage 813 (foreachPartition at PredictorEngineApp.java:153) finished in 11.067 s 18/04/17 17:03:11 INFO scheduler.DAGScheduler: Job 813 finished: foreachPartition at PredictorEngineApp.java:153, took 11.148523 s 18/04/17 17:03:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4045854a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4045854a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38232, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95ac, negotiated timeout = 60000 18/04/17 17:03:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95ac 18/04/17 17:03:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95ac closed 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.18 from job set of time 1523973780000 ms 18/04/17 17:03:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 817.0 (TID 817) in 11472 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:03:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 817.0, whose tasks have all completed, from pool 18/04/17 17:03:11 INFO scheduler.DAGScheduler: ResultStage 817 (foreachPartition at PredictorEngineApp.java:153) finished in 11.472 s 18/04/17 17:03:11 INFO scheduler.DAGScheduler: Job 817 finished: foreachPartition at PredictorEngineApp.java:153, took 11.562232 s 18/04/17 17:03:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xce29489 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xce294890x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60086, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9564, negotiated timeout = 60000 18/04/17 17:03:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9564 18/04/17 17:03:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9564 closed 18/04/17 17:03:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.34 from job set of time 1523973780000 ms 18/04/17 17:03:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 806.0 (TID 806) in 17801 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:03:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 806.0, whose tasks have all completed, from pool 18/04/17 17:03:17 INFO scheduler.DAGScheduler: ResultStage 806 (foreachPartition at PredictorEngineApp.java:153) finished in 17.802 s 18/04/17 17:03:17 INFO scheduler.DAGScheduler: Job 807 finished: foreachPartition at PredictorEngineApp.java:153, took 17.862953 s 18/04/17 17:03:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72031664 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x720316640x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60100, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9567, negotiated timeout = 60000 18/04/17 17:03:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9567 18/04/17 17:03:18 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9567 closed 18/04/17 17:03:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:18 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.5 from job set of time 1523973780000 ms 18/04/17 17:03:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 796.0 (TID 796) in 17962 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:03:18 INFO scheduler.DAGScheduler: ResultStage 796 (foreachPartition at PredictorEngineApp.java:153) finished in 17.963 s 18/04/17 17:03:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 796.0, whose tasks have all completed, from pool 18/04/17 17:03:18 INFO scheduler.DAGScheduler: Job 796 finished: foreachPartition at PredictorEngineApp.java:153, took 17.984535 s 18/04/17 17:03:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x51a82a03 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x51a82a030x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60104, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9568, negotiated timeout = 60000 18/04/17 17:03:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9568 18/04/17 17:03:18 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9568 closed 18/04/17 17:03:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:18 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.26 from job set of time 1523973780000 ms 18/04/17 17:03:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 793.0 (TID 793) in 19599 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:03:19 INFO scheduler.DAGScheduler: ResultStage 793 (foreachPartition at PredictorEngineApp.java:153) finished in 19.599 s 18/04/17 17:03:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 793.0, whose tasks have all completed, from pool 18/04/17 17:03:19 INFO scheduler.DAGScheduler: Job 793 finished: foreachPartition at PredictorEngineApp.java:153, took 19.611390 s 18/04/17 17:03:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xf2bb53b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xf2bb53b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38257, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95b1, negotiated timeout = 60000 18/04/17 17:03:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95b1 18/04/17 17:03:19 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95b1 closed 18/04/17 17:03:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:19 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.1 from job set of time 1523973780000 ms 18/04/17 17:03:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 814.0 (TID 814) in 20157 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:03:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 814.0, whose tasks have all completed, from pool 18/04/17 17:03:20 INFO scheduler.DAGScheduler: ResultStage 814 (foreachPartition at PredictorEngineApp.java:153) finished in 20.158 s 18/04/17 17:03:20 INFO scheduler.DAGScheduler: Job 814 finished: foreachPartition at PredictorEngineApp.java:153, took 20.241957 s 18/04/17 17:03:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x59f8e165 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:03:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x59f8e1650x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:03:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:03:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38262, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:03:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95b2, negotiated timeout = 60000 18/04/17 17:03:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95b2 18/04/17 17:03:20 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95b2 closed 18/04/17 17:03:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:03:20 INFO scheduler.JobScheduler: Finished job streaming job 1523973780000 ms.22 from job set of time 1523973780000 ms 18/04/17 17:03:20 INFO scheduler.JobScheduler: Total delay: 20.335 s for time 1523973780000 ms (execution: 20.283 s) 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1044 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1044 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1044 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1044 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1045 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1045 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1045 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1045 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1046 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1046 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1046 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1046 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1047 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1047 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1047 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1047 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1048 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1048 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1048 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1048 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1049 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1049 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1049 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1049 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1050 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1050 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1050 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1050 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1051 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1051 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1051 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1051 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1052 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1052 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1052 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1052 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1053 
from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1053 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1053 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1053 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1054 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1054 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1054 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1054 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1055 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1055 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1055 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1055 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1056 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1056 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1056 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1056 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1057 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1057 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1057 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1057 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1058 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1058 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1058 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_794_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1058 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1059 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1059 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1059 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1059 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1060 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_794_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1060 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1060 from persistence list 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 793 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 800 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1060 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1061 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1061 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1061 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_793_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1061 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1062 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_793_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1062 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1062 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1062 18/04/17 17:03:20 INFO spark.ContextCleaner: 
Cleaned accumulator 796 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1063 from persistence list 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 799 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 801 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 795 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1063 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1063 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1063 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1064 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_795_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1064 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1064 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1064 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1065 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_795_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1065 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1065 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1065 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1066 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1066 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1066 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_797_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1066 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_797_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1067 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1067 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1067 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1067 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1068 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1068 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1068 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_801_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1068 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1069 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1069 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1069 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_801_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1069 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1070 from persistence list 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 802 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1070 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1070 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1070 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1071 from persistence list 18/04/17 17:03:20 INFO 
storage.BlockManagerInfo: Removed broadcast_800_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1071 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1071 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1071 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1072 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1072 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1072 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_800_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1072 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1073 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1073 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1073 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1073 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1074 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_796_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1074 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1074 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_796_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1074 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1075 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1075 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1075 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1075 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1076 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_802_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1076 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1076 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_802_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1076 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1077 from persistence list 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 803 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 805 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1077 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1077 from persistence list 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1077 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1078 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_803_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1078 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1078 from persistence list 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_803_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1078 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1079 from persistence list 
18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1079 18/04/17 17:03:20 INFO kafka.KafkaRDD: Removing RDD 1079 from persistence list 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 804 18/04/17 17:03:20 INFO storage.BlockManager: Removing RDD 1079 18/04/17 17:03:20 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:03:20 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973660000 ms 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_792_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_792_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 794 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 798 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_805_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_805_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 806 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_804_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_804_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 808 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_806_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_806_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 807 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_817_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_817_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 818 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_816_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_816_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_808_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_808_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 809 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_807_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_807_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 811 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_809_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed 
broadcast_809_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 810 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_811_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_811_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 812 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_810_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_810_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 814 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_812_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_812_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 813 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_814_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_814_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 815 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_813_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_813_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 797 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_815_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_815_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 816 18/04/17 17:03:20 INFO spark.ContextCleaner: Cleaned accumulator 817 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_798_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_798_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_799_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:03:20 INFO storage.BlockManagerInfo: Removed broadcast_799_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO scheduler.JobScheduler: Added jobs for time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.0 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.1 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.2 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.0 from 
job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.5 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.3 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.4 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.3 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.6 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.4 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.8 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.7 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.9 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.10 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.11 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.13 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.12 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.14 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.13 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.15 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.16 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.16 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.14 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.19 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.18 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.17 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.20 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.17 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.21 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.21 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.23 from 
job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.22 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.24 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.26 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.27 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.25 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.28 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.29 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.30 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.32 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.31 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.33 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.30 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.34 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973840000 ms.35 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.35 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 818 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 818 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 818 (KafkaRDD[1142] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_818 stored as values in memory (estimated size 5.7 KB, free 491.7 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at 
PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_818_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.7 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_818_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 818 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 818 (KafkaRDD[1142] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 818.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 819 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 819 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 818.0 (TID 818, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 819 (KafkaRDD[1118] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_819 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_819_piece0 stored as bytes in memory (estimated size 3.1 KB, 
free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_819_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 819 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 819 (KafkaRDD[1118] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 819.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 820 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 820 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 820 (KafkaRDD[1150] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 819.0 (TID 819, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_820 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_820_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_820_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 820 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 820 (KafkaRDD[1150] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 820.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 821 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 821 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 821 (KafkaRDD[1124] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 820.0 (TID 820, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_821 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_818_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_821_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_821_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 821 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 821 (KafkaRDD[1124] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 821.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 822 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 822 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 822 (KafkaRDD[1128] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 821.0 (TID 821, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_822 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_819_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_822_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_822_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 822 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 822 (KafkaRDD[1128] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 822.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 823 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 823 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 823 (KafkaRDD[1138] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 822.0 (TID 822, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_823 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_820_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_823_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_823_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 823 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 823 (KafkaRDD[1138] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 823.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 824 (foreachPartition at 
PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 824 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 824 (KafkaRDD[1139] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 823.0 (TID 823, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_824 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_824_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_824_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 824 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 824 (KafkaRDD[1139] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 824.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 826 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 825 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 825 (KafkaRDD[1149] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 824.0 (TID 824, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_825 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_821_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_822_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_825_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_825_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 825 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 825 (KafkaRDD[1149] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 825.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 825 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 826 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: 
Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 826 (KafkaRDD[1140] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_826 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 825.0 (TID 825, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_826_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_826_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 826 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 826 (KafkaRDD[1140] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 826.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 827 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 827 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 827 (KafkaRDD[1145] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_827 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 826.0 (TID 826, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_827_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_827_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 827 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 827 (KafkaRDD[1145] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 827.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 828 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 828 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 828 (KafkaRDD[1122] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_828 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 827.0 (TID 827, ***hostname masked***, 
executor 1, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_823_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_826_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_828_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_828_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 828 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 828 (KafkaRDD[1122] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 828.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 829 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 829 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 829 (KafkaRDD[1148] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_829 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 828.0 (TID 828, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_829_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_829_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 829 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 829 (KafkaRDD[1148] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 829.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 830 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 830 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 830 (KafkaRDD[1123] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_830 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 829.0 (TID 829, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_830_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO 
storage.BlockManagerInfo: Added broadcast_830_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_828_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 830 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 830 (KafkaRDD[1123] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 830.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 832 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_827_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 831 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 831 (KafkaRDD[1126] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_831 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 830.0 (TID 830, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_831_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_831_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 831 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 831 (KafkaRDD[1126] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 831.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 831 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 832 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 832 (KafkaRDD[1136] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_832 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 831.0 (TID 831, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_832_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_832_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 832 from broadcast at DAGScheduler.scala:1006 
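[Editor's note] The burst of entries above — one "Got job N (foreachPartition at PredictorEngineApp.java:153)" per single-partition KafkaRDD created "at createDirectStream at PredictorEngineApp.java:125" — is the per-batch pattern of a Spark 1.6 Streaming driver that builds one or more direct Kafka streams and registers a foreachPartition output action on each. The following is only a hypothetical sketch of what the code around those two line numbers might look like; the topic names, broker list, batch interval and record handling are assumptions, not taken from the real application.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class PredictorEngineApp {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setAppName("predictor-engine");
    // 60 s batches would match the single 1523973840000 ms batch time seen above (assumption).
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers

    // The ~36 streaming jobs per batch are consistent with one direct stream
    // (and one output action) per topic; the topic list here is a placeholder.
    for (String topic : Arrays.asList("topic-a", "topic-b", "topic-c")) {
      Set<String> topics = new HashSet<>(Arrays.asList(topic));
      JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
          jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
          kafkaParams, topics);                      // would log "createDirectStream at ...:125"

      stream.foreachRDD(rdd -> {
        rdd.foreachPartition(records -> {            // would log "foreachPartition at ...:153"
          while (records.hasNext()) {
            // score the record and emit the prediction; details are unknown from the log
            records.next();
          }
        });
      });
    }

    jssc.start();
    jssc.awaitTermination();
  }
}

Each foreachPartition action becomes one job per batch, which matches the one-task ResultStages (818, 819, 820, ...) being submitted and scheduled NODE_LOCAL/RACK_LOCAL against the Kafka leader hosts in the entries above and below.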
18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 832 (KafkaRDD[1136] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 832.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 833 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 833 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 833 (KafkaRDD[1135] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_829_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_833 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 832.0 (TID 832, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_825_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_824_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_833_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_830_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_833_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 833 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 833 (KafkaRDD[1135] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 833.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 834 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 834 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 834 (KafkaRDD[1147] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 833.0 (TID 833, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_834 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_831_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_834_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO 
storage.BlockManagerInfo: Added broadcast_834_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 834 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 834 (KafkaRDD[1147] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 834.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 835 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 835 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 835 (KafkaRDD[1134] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 834.0 (TID 834, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_835 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_832_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_835_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_835_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 835 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 835 (KafkaRDD[1134] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 835.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 836 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 836 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 836 (KafkaRDD[1125] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_836 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 835.0 (TID 835, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_833_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_836_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_836_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 836 from broadcast at DAGScheduler.scala:1006 
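[Editor's note] Further down in this batch, every "Job N finished" entry is followed by a RecoverableZooKeeper / hconnection-0x... sequence that opens a ZooKeeper session and closes it again moments later. That churn is what repeated creation of short-lived HBase client connections looks like; driver-side bookkeeping after each streaming job (for example, writing status or offsets to HBase) is one plausible explanation, though the log does not say. The sketch below shows only the generic connect-write-close sequence with the HBase 1.x client API, under the assumption that this is ordinary HBase client code; the table, column family and row key are placeholders.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public final class PredictionWriter {
  private PredictionWriter() {}

  // Opens a fresh Connection, writes one row, and closes it again. Each call
  // produces one "Opening socket connection ..." / "Session ... closed" pair in
  // the ZooKeeper client log, matching the per-job churn visible below.
  public static void writePrediction(String rowKey, byte[] value) throws IOException {
    Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("predictions"))) { // placeholder table
      Put put = new Put(Bytes.toBytes(rowKey));
      put.addColumn(Bytes.toBytes("p"), Bytes.toBytes("score"), value); // placeholder column
      table.put(put);
    }
  }
}

Reusing one cached Connection per JVM would avoid opening a new ZooKeeper session for every job, but whether that applies to this application cannot be determined from the log alone.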
18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 836 (KafkaRDD[1125] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 836.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 837 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 837 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 837 (KafkaRDD[1141] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_837 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 836.0 (TID 836, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_835_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_834_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_837_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_837_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 837 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 837 (KafkaRDD[1141] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 837.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 838 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 838 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 838 (KafkaRDD[1121] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_838 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 837.0 (TID 837, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_836_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_838_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_838_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 838 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from 
ResultStage 838 (KafkaRDD[1121] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 838.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 839 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 839 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 839 (KafkaRDD[1131] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_839 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 838.0 (TID 838, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_837_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_839_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_839_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 839 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 839 (KafkaRDD[1131] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 839.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 840 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 840 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 840 (KafkaRDD[1127] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_840 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 839.0 (TID 839, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_840_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_840_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 840 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 840 (KafkaRDD[1127] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 840.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 841 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: 
ResultStage 841 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 841 (KafkaRDD[1117] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_838_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_841 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 840.0 (TID 840, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_841_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_841_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 841 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 841 (KafkaRDD[1117] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 841.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 842 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 842 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 842 (KafkaRDD[1143] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_842 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 841.0 (TID 841, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_839_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_842_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_842_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 842 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 842 (KafkaRDD[1143] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 842.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Got job 843 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 843 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 
17:04:00 INFO scheduler.DAGScheduler: Submitting ResultStage 843 (KafkaRDD[1144] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_843 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 842.0 (TID 842, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:04:00 INFO storage.MemoryStore: Block broadcast_843_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_843_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:04:00 INFO spark.SparkContext: Created broadcast 843 from broadcast at DAGScheduler.scala:1006 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 843 (KafkaRDD[1144] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Adding task set 843.0 with 1 tasks 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 843.0 (TID 843, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_841_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_842_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_840_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 834.0 (TID 834) in 58 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 834.0, whose tasks have all completed, from pool 18/04/17 17:04:00 INFO scheduler.DAGScheduler: ResultStage 834 (foreachPartition at PredictorEngineApp.java:153) finished in 0.059 s 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Job 834 finished: foreachPartition at PredictorEngineApp.java:153, took 0.135616 s 18/04/17 17:04:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2c973a84 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2c973a840x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43005, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:00 INFO storage.BlockManagerInfo: Added broadcast_843_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28eac, negotiated timeout = 60000 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 842.0 (TID 842) in 48 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 842.0, whose tasks have all completed, from pool 18/04/17 17:04:00 INFO scheduler.DAGScheduler: ResultStage 842 (foreachPartition at PredictorEngineApp.java:153) finished in 0.048 s 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Job 842 finished: foreachPartition at PredictorEngineApp.java:153, took 0.155202 s 18/04/17 17:04:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28eac 18/04/17 17:04:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5019f1c4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5019f1c40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38414, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28eac closed 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95be, negotiated timeout = 60000 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.31 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95be 18/04/17 17:04:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95be closed 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.27 from job set of time 1523973840000 ms 18/04/17 17:04:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 836.0 (TID 836) in 151 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:04:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 836.0, whose tasks have all completed, from pool 18/04/17 17:04:00 INFO scheduler.DAGScheduler: ResultStage 836 (foreachPartition at PredictorEngineApp.java:153) finished in 0.152 s 18/04/17 17:04:00 INFO scheduler.DAGScheduler: Job 836 finished: foreachPartition at PredictorEngineApp.java:153, took 0.238646 s 18/04/17 17:04:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x180457d3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x180457d30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43012, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28eae, negotiated timeout = 60000 18/04/17 17:04:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28eae 18/04/17 17:04:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28eae closed 18/04/17 17:04:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.9 from job set of time 1523973840000 ms 18/04/17 17:04:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 837.0 (TID 837) in 1304 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:04:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 837.0, whose tasks have all completed, from pool 18/04/17 17:04:01 INFO scheduler.DAGScheduler: ResultStage 837 (foreachPartition at PredictorEngineApp.java:153) finished in 1.306 s 18/04/17 17:04:01 INFO scheduler.DAGScheduler: Job 837 finished: foreachPartition at PredictorEngineApp.java:153, took 1.397325 s 18/04/17 17:04:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3799b24a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3799b24a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43017, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28eb6, negotiated timeout = 60000 18/04/17 17:04:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28eb6 18/04/17 17:04:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28eb6 closed 18/04/17 17:04:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:01 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.25 from job set of time 1523973840000 ms 18/04/17 17:04:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 821.0 (TID 821) in 1956 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:04:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 821.0, whose tasks have all completed, from pool 18/04/17 17:04:02 INFO scheduler.DAGScheduler: ResultStage 821 (foreachPartition at PredictorEngineApp.java:153) finished in 1.957 s 18/04/17 17:04:02 INFO scheduler.DAGScheduler: Job 821 finished: foreachPartition at PredictorEngineApp.java:153, took 1.977804 s 18/04/17 17:04:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x45c5934 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x45c59340x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38425, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95c0, negotiated timeout = 60000 18/04/17 17:04:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95c0 18/04/17 17:04:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95c0 closed 18/04/17 17:04:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.8 from job set of time 1523973840000 ms 18/04/17 17:04:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 830.0 (TID 830) in 3273 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:04:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 830.0, whose tasks have all completed, from pool 18/04/17 17:04:03 INFO scheduler.DAGScheduler: ResultStage 830 (foreachPartition at PredictorEngineApp.java:153) finished in 3.274 s 18/04/17 17:04:03 INFO scheduler.DAGScheduler: Job 830 finished: foreachPartition at PredictorEngineApp.java:153, took 3.323890 s 18/04/17 17:04:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x604d65d8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x604d65d80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60282, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9579, negotiated timeout = 60000 18/04/17 17:04:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9579 18/04/17 17:04:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9579 closed 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.7 from job set of time 1523973840000 ms 18/04/17 17:04:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 829.0 (TID 829) in 3779 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:04:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 829.0, whose tasks have all completed, from pool 18/04/17 17:04:03 INFO scheduler.DAGScheduler: ResultStage 829 (foreachPartition at PredictorEngineApp.java:153) finished in 3.780 s 18/04/17 17:04:03 INFO scheduler.DAGScheduler: Job 829 finished: foreachPartition at PredictorEngineApp.java:153, took 3.827318 s 18/04/17 17:04:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3f8d5491 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3f8d54910x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43029, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28eb8, negotiated timeout = 60000 18/04/17 17:04:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28eb8 18/04/17 17:04:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28eb8 closed 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.32 from job set of time 1523973840000 ms 18/04/17 17:04:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 824.0 (TID 824) in 3850 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:04:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 824.0, whose tasks have all completed, from pool 18/04/17 17:04:03 INFO scheduler.DAGScheduler: ResultStage 824 (foreachPartition at PredictorEngineApp.java:153) finished in 3.850 s 18/04/17 17:04:03 INFO scheduler.DAGScheduler: Job 824 finished: foreachPartition at PredictorEngineApp.java:153, took 3.883218 s 18/04/17 17:04:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3c4cff70 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3c4cff700x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38437, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95c3, negotiated timeout = 60000 18/04/17 17:04:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95c3 18/04/17 17:04:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95c3 closed 18/04/17 17:04:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.23 from job set of time 1523973840000 ms 18/04/17 17:04:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 832.0 (TID 832) in 4002 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:04:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 832.0, whose tasks have all completed, from pool 18/04/17 17:04:04 INFO scheduler.DAGScheduler: ResultStage 832 (foreachPartition at PredictorEngineApp.java:153) finished in 4.014 s 18/04/17 17:04:04 INFO scheduler.DAGScheduler: Job 831 finished: foreachPartition at PredictorEngineApp.java:153, took 4.069600 s 18/04/17 17:04:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x300b0317 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x300b03170x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38441, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95c4, negotiated timeout = 60000 18/04/17 17:04:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95c4 18/04/17 17:04:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95c4 closed 18/04/17 17:04:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.20 from job set of time 1523973840000 ms 18/04/17 17:04:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 820.0 (TID 820) in 4347 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:04:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 820.0, whose tasks have all completed, from pool 18/04/17 17:04:04 INFO scheduler.DAGScheduler: ResultStage 820 (foreachPartition at PredictorEngineApp.java:153) finished in 4.347 s 18/04/17 17:04:04 INFO scheduler.DAGScheduler: Job 820 finished: foreachPartition at PredictorEngineApp.java:153, took 4.363993 s 18/04/17 17:04:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x8f52434 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x8f524340x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60295, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a957a, negotiated timeout = 60000 18/04/17 17:04:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a957a 18/04/17 17:04:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a957a closed 18/04/17 17:04:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.34 from job set of time 1523973840000 ms 18/04/17 17:04:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 822.0 (TID 822) in 5489 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:04:05 INFO scheduler.DAGScheduler: ResultStage 822 (foreachPartition at PredictorEngineApp.java:153) finished in 5.489 s 18/04/17 17:04:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 822.0, whose tasks have all completed, from pool 18/04/17 17:04:05 INFO scheduler.DAGScheduler: Job 822 finished: foreachPartition at PredictorEngineApp.java:153, took 5.514710 s 18/04/17 17:04:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x62948484 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x629484840x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60300, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a957c, negotiated timeout = 60000 18/04/17 17:04:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a957c 18/04/17 17:04:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a957c closed 18/04/17 17:04:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.12 from job set of time 1523973840000 ms 18/04/17 17:04:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 839.0 (TID 839) in 5736 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:04:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 839.0, whose tasks have all completed, from pool 18/04/17 17:04:05 INFO scheduler.DAGScheduler: ResultStage 839 (foreachPartition at PredictorEngineApp.java:153) finished in 5.736 s 18/04/17 17:04:05 INFO scheduler.DAGScheduler: Job 839 finished: foreachPartition at PredictorEngineApp.java:153, took 5.835232 s 18/04/17 17:04:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x755e5d74 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x755e5d740x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38452, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95c9, negotiated timeout = 60000 18/04/17 17:04:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95c9 18/04/17 17:04:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95c9 closed 18/04/17 17:04:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.15 from job set of time 1523973840000 ms 18/04/17 17:04:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 843.0 (TID 843) in 6943 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:04:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 843.0, whose tasks have all completed, from pool 18/04/17 17:04:07 INFO scheduler.DAGScheduler: ResultStage 843 (foreachPartition at PredictorEngineApp.java:153) finished in 6.944 s 18/04/17 17:04:07 INFO scheduler.DAGScheduler: Job 843 finished: foreachPartition at PredictorEngineApp.java:153, took 7.053119 s 18/04/17 17:04:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d625464 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d6254640x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43052, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ebc, negotiated timeout = 60000 18/04/17 17:04:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ebc 18/04/17 17:04:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ebc closed 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.28 from job set of time 1523973840000 ms 18/04/17 17:04:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 833.0 (TID 833) in 7065 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:04:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 833.0, whose tasks have all completed, from pool 18/04/17 17:04:07 INFO scheduler.DAGScheduler: ResultStage 833 (foreachPartition at PredictorEngineApp.java:153) finished in 7.066 s 18/04/17 17:04:07 INFO scheduler.DAGScheduler: Job 833 finished: foreachPartition at PredictorEngineApp.java:153, took 7.136690 s 18/04/17 17:04:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x158c062d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x158c062d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60311, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a957f, negotiated timeout = 60000 18/04/17 17:04:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a957f 18/04/17 17:04:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a957f closed 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.19 from job set of time 1523973840000 ms 18/04/17 17:04:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 823.0 (TID 823) in 7646 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:04:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 823.0, whose tasks have all completed, from pool 18/04/17 17:04:07 INFO scheduler.DAGScheduler: ResultStage 823 (foreachPartition at PredictorEngineApp.java:153) finished in 7.646 s 18/04/17 17:04:07 INFO scheduler.DAGScheduler: Job 823 finished: foreachPartition at PredictorEngineApp.java:153, took 7.675331 s 18/04/17 17:04:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3de47b05 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3de47b050x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43058, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ebf, negotiated timeout = 60000 18/04/17 17:04:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 825.0 (TID 825) in 7649 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:04:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 825.0, whose tasks have all completed, from pool 18/04/17 17:04:07 INFO scheduler.DAGScheduler: ResultStage 825 (foreachPartition at PredictorEngineApp.java:153) finished in 7.650 s 18/04/17 17:04:07 INFO scheduler.DAGScheduler: Job 826 finished: foreachPartition at PredictorEngineApp.java:153, took 7.685550 s 18/04/17 17:04:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ebf 18/04/17 17:04:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ebf closed 18/04/17 17:04:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.22 from job set of time 1523973840000 ms 18/04/17 17:04:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.33 from job set of time 1523973840000 ms 18/04/17 17:04:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 826.0 (TID 826) in 9045 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:04:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 826.0, whose tasks have all completed, from pool 18/04/17 17:04:09 INFO scheduler.DAGScheduler: ResultStage 826 (foreachPartition at PredictorEngineApp.java:153) finished in 9.046 s 18/04/17 17:04:09 INFO scheduler.DAGScheduler: Job 825 finished: foreachPartition at PredictorEngineApp.java:153, took 9.084443 s 18/04/17 17:04:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x69379e4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x69379e40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38469, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95cb, negotiated timeout = 60000 18/04/17 17:04:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95cb 18/04/17 17:04:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95cb closed 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.24 from job set of time 1523973840000 ms 18/04/17 17:04:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 827.0 (TID 827) in 9288 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:04:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 827.0, whose tasks have all completed, from pool 18/04/17 17:04:09 INFO scheduler.DAGScheduler: ResultStage 827 (foreachPartition at PredictorEngineApp.java:153) finished in 9.289 s 18/04/17 17:04:09 INFO scheduler.DAGScheduler: Job 827 finished: foreachPartition at PredictorEngineApp.java:153, took 9.330312 s 18/04/17 17:04:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1a979857 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1a9798570x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43067, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ec0, negotiated timeout = 60000 18/04/17 17:04:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ec0 18/04/17 17:04:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ec0 closed 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.29 from job set of time 1523973840000 ms 18/04/17 17:04:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 831.0 (TID 831) in 9551 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:04:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 831.0, whose tasks have all completed, from pool 18/04/17 17:04:09 INFO scheduler.DAGScheduler: ResultStage 831 (foreachPartition at PredictorEngineApp.java:153) finished in 9.552 s 18/04/17 17:04:09 INFO scheduler.DAGScheduler: Job 832 finished: foreachPartition at PredictorEngineApp.java:153, took 9.605021 s 18/04/17 17:04:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x283770fc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x283770fc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43070, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ec2, negotiated timeout = 60000 18/04/17 17:04:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ec2 18/04/17 17:04:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ec2 closed 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.10 from job set of time 1523973840000 ms 18/04/17 17:04:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 828.0 (TID 828) in 9774 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:04:09 INFO scheduler.DAGScheduler: ResultStage 828 (foreachPartition at PredictorEngineApp.java:153) finished in 9.775 s 18/04/17 17:04:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 828.0, whose tasks have all completed, from pool 18/04/17 17:04:09 INFO scheduler.DAGScheduler: Job 828 finished: foreachPartition at PredictorEngineApp.java:153, took 9.819515 s 18/04/17 17:04:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x42a7e68a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x42a7e68a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60329, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9581, negotiated timeout = 60000 18/04/17 17:04:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9581 18/04/17 17:04:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9581 closed 18/04/17 17:04:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.6 from job set of time 1523973840000 ms 18/04/17 17:04:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 819.0 (TID 819) in 10147 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:04:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 819.0, whose tasks have all completed, from pool 18/04/17 17:04:10 INFO scheduler.DAGScheduler: ResultStage 819 (foreachPartition at PredictorEngineApp.java:153) finished in 10.147 s 18/04/17 17:04:10 INFO scheduler.DAGScheduler: Job 819 finished: foreachPartition at PredictorEngineApp.java:153, took 10.160584 s 18/04/17 17:04:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x75250818 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x752508180x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38482, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95ce, negotiated timeout = 60000 18/04/17 17:04:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95ce 18/04/17 17:04:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95ce closed 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.2 from job set of time 1523973840000 ms 18/04/17 17:04:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 835.0 (TID 835) in 10154 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:04:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 835.0, whose tasks have all completed, from pool 18/04/17 17:04:10 INFO scheduler.DAGScheduler: ResultStage 835 (foreachPartition at PredictorEngineApp.java:153) finished in 10.155 s 18/04/17 17:04:10 INFO scheduler.DAGScheduler: Job 835 finished: foreachPartition at PredictorEngineApp.java:153, took 10.236325 s 18/04/17 17:04:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x71600f00 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x71600f000x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38485, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95cf, negotiated timeout = 60000 18/04/17 17:04:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95cf 18/04/17 17:04:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95cf closed 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.18 from job set of time 1523973840000 ms 18/04/17 17:04:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 841.0 (TID 841) in 10342 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:04:10 INFO scheduler.DAGScheduler: ResultStage 841 (foreachPartition at PredictorEngineApp.java:153) finished in 10.342 s 18/04/17 17:04:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 841.0, whose tasks have all completed, from pool 18/04/17 17:04:10 INFO scheduler.DAGScheduler: Job 841 finished: foreachPartition at PredictorEngineApp.java:153, took 10.446604 s 18/04/17 17:04:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3cfae5db connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3cfae5db0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43083, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ec5, negotiated timeout = 60000 18/04/17 17:04:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ec5 18/04/17 17:04:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ec5 closed 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.1 from job set of time 1523973840000 ms 18/04/17 17:04:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 838.0 (TID 838) in 10479 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:04:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 838.0, whose tasks have all completed, from pool 18/04/17 17:04:10 INFO scheduler.DAGScheduler: ResultStage 838 (foreachPartition at PredictorEngineApp.java:153) finished in 10.479 s 18/04/17 17:04:10 INFO scheduler.DAGScheduler: Job 838 finished: foreachPartition at PredictorEngineApp.java:153, took 10.575096 s 18/04/17 17:04:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3623fe7f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3623fe7f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38492, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95d0, negotiated timeout = 60000 18/04/17 17:04:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95d0 18/04/17 17:04:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95d0 closed 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.5 from job set of time 1523973840000 ms 18/04/17 17:04:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 840.0 (TID 840) in 10814 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:04:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 840.0, whose tasks have all completed, from pool 18/04/17 17:04:10 INFO scheduler.DAGScheduler: ResultStage 840 (foreachPartition at PredictorEngineApp.java:153) finished in 10.815 s 18/04/17 17:04:10 INFO scheduler.DAGScheduler: Job 840 finished: foreachPartition at PredictorEngineApp.java:153, took 10.916242 s 18/04/17 17:04:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x65905044 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x659050440x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38495, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95d1, negotiated timeout = 60000 18/04/17 17:04:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95d1 18/04/17 17:04:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95d1 closed 18/04/17 17:04:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.11 from job set of time 1523973840000 ms 18/04/17 17:04:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 818.0 (TID 818) in 12523 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:04:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 818.0, whose tasks have all completed, from pool 18/04/17 17:04:12 INFO scheduler.DAGScheduler: ResultStage 818 (foreachPartition at PredictorEngineApp.java:153) finished in 12.523 s 18/04/17 17:04:12 INFO scheduler.DAGScheduler: Job 818 finished: foreachPartition at PredictorEngineApp.java:153, took 12.531929 s 18/04/17 17:04:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x589666d3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:04:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x589666d30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:04:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:04:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43095, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:04:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ec8, negotiated timeout = 60000 18/04/17 17:04:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ec8 18/04/17 17:04:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ec8 closed 18/04/17 17:04:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:04:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973840000 ms.26 from job set of time 1523973840000 ms 18/04/17 17:04:12 INFO scheduler.JobScheduler: Total delay: 12.628 s for time 1523973840000 ms (execution: 12.568 s) 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1080 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1080 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1080 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1080 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1081 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1081 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1081 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1081 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1082 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1082 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1082 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1082 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1083 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1083 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1083 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1083 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1084 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1084 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1084 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1084 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1085 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1085 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1085 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1085 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1086 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1086 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1086 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1086 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1087 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1087 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1087 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1087 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1088 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1088 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1088 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1088 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1089 
from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1089 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1089 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1089 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1090 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1090 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1090 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1090 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1091 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1091 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1091 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1091 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1092 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1092 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1092 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1092 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1093 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1093 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1093 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1093 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1094 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1094 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1094 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1094 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1095 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1095 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1095 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1095 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1096 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1096 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1096 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1096 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1097 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1097 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1097 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1097 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1098 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1098 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1098 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1098 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1099 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1099 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1099 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1099 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1100 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1100 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1100 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1100 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1101 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1101 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1101 from 
persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1101 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1102 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1102 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1102 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1102 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1103 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1103 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1103 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1103 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1104 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1104 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1104 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1104 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1105 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1105 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1105 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1105 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1106 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1106 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1106 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1106 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1107 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1107 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1107 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1107 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1108 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1108 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1108 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1108 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1109 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1109 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1109 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1109 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1110 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1110 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1110 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1110 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1111 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1111 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1111 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1111 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1112 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1112 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1112 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1112 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1113 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1113 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1113 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1113 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1114 from 
persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1114 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1114 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1114 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1115 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1115 18/04/17 17:04:12 INFO kafka.KafkaRDD: Removing RDD 1115 from persistence list 18/04/17 17:04:12 INFO storage.BlockManager: Removing RDD 1115 18/04/17 17:04:12 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:04:12 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973720000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Added jobs for time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.0 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.1 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.2 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.0 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.3 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.5 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.4 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.4 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.6 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.8 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.3 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.7 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.10 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.9 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.11 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.12 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.13 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.14 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.13 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.16 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.16 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO 
scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.14 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.18 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.19 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.15 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.17 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.17 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.20 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.21 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.21 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.22 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.23 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.24 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.25 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.26 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.27 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.28 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.29 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.31 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.33 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.32 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.30 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.30 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.34 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973900000 ms.35 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.35 from job set of time 1523973900000 ms 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 842 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_818_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: 
foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 844 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 844 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 844 (KafkaRDD[1174] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_818_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_844 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 819 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned 
accumulator 821 18/04/17 17:05:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_819_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_819_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 820 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_844_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_844_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_821_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 844 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 844 (KafkaRDD[1174] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 844.0 with 1 tasks 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_821_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 845 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 822 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 845 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 845 (KafkaRDD[1159] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 844.0 (TID 844, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_845 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_820_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_820_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 824 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_822_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_845_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_845_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 845 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 845 (KafkaRDD[1159] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 845.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got 
job 846 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 846 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 846 (KafkaRDD[1172] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 845.0 (TID 845, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_846 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_822_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 823 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_824_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_846_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_846_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 846 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 846 (KafkaRDD[1172] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 846.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 847 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 847 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 847 (KafkaRDD[1153] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 846.0 (TID 846, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_824_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_847 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 825 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_823_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_847_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_847_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 847 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 847 
(KafkaRDD[1153] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 847.0 with 1 tasks 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_823_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 848 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 848 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 848 (KafkaRDD[1171] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 847.0 (TID 847, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_848 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 827 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_825_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_825_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_848_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_848_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_845_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 848 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 848 (KafkaRDD[1171] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 848.0 with 1 tasks 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 826 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 828 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 849 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 849 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 849 (KafkaRDD[1176] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 848.0 (TID 848, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_826_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_849 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed 
broadcast_826_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_849_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_844_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_828_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_849_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 849 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 849 (KafkaRDD[1176] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 849.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 850 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 850 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_846_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 850 (KafkaRDD[1160] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_828_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 849.0 (TID 849, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_850 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 829 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_827_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_827_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 831 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_829_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_850_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_850_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 850 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 850 (KafkaRDD[1160] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 850.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 851 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 
INFO scheduler.DAGScheduler: Final stage: ResultStage 851 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 851 (KafkaRDD[1162] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_851 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 850.0 (TID 850, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_829_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_847_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 830 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_831_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_831_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_851_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_851_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 851 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 851 (KafkaRDD[1162] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 851.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 852 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 852 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 852 (KafkaRDD[1175] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 832 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_852 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 851.0 (TID 851, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_830_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_830_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_852_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_852_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 
INFO storage.BlockManagerInfo: Removed broadcast_832_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 852 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 852 (KafkaRDD[1175] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 852.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 853 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 853 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 853 (KafkaRDD[1167] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 852.0 (TID 852, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_853 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_850_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_832_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 833 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 835 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_833_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_851_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_833_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_853_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 834 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_853_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 853 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 853 (KafkaRDD[1167] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 853.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 854 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 854 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 854 (KafkaRDD[1161] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO 
storage.BlockManagerInfo: Removed broadcast_835_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_849_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_854 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 853.0 (TID 853, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_835_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 836 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_834_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_854_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_834_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_854_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 854 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 854 (KafkaRDD[1161] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 854.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 855 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 855 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 855 (KafkaRDD[1170] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_855 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 838 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 854.0 (TID 854, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_836_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_852_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_836_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 837 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_838_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_855_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_855_piece0 in memory on ***IP masked***:45737 
(size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_838_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 855 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 855 (KafkaRDD[1170] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 855.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 856 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 856 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 856 (KafkaRDD[1184] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 839 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_856 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 855.0 (TID 855, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_837_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_848_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_837_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 841 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_839_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_856_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_856_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 856 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_839_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 856 (KafkaRDD[1184] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 856.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 857 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 857 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 857 (KafkaRDD[1177] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block 
broadcast_857 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 856.0 (TID 856, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 840 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_841_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_841_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_857_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_857_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 857 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 857 (KafkaRDD[1177] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 857.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 858 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 858 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 858 (KafkaRDD[1178] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_840_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_854_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_858 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 857.0 (TID 857, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_840_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_858_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_858_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 858 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 858 (KafkaRDD[1178] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 858.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 859 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 859 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 
INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 859 (KafkaRDD[1185] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_855_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_859 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 858.0 (TID 858, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_856_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_859_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_859_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 859 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 859 (KafkaRDD[1185] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 859.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 860 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 860 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 860 (KafkaRDD[1154] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_860 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 859.0 (TID 859, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_860_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_860_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 860 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 860 (KafkaRDD[1154] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 860.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 861 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 861 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 861 (KafkaRDD[1164] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 
18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_861 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 860.0 (TID 860, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_858_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 843 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_861_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_853_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_843_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_861_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 861 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 861 (KafkaRDD[1164] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 861.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 862 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 862 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 862 (KafkaRDD[1180] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_862 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_857_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 861.0 (TID 861, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_859_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_843_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_862_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_862_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 862 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 862 (KafkaRDD[1180] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 862.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 864 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 863 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 863 (KafkaRDD[1179] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_863 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 862.0 (TID 862, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_863_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_863_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 863 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 863 (KafkaRDD[1179] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 863.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 865 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 864 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 864 (KafkaRDD[1183] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_864 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 863.0 (TID 863, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_861_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_864_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_864_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 864 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 864 (KafkaRDD[1183] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 864.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 866 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 865 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 865 (KafkaRDD[1157] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_865 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 864.0 (TID 864, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_860_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO spark.ContextCleaner: Cleaned accumulator 844 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_842_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_865_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_865_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 865 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 865 (KafkaRDD[1157] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 865.0 with 1 tasks 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_862_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 863 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 866 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 866 (KafkaRDD[1181] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_866 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Removed broadcast_842_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 865.0 (TID 865, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_866_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_866_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 866 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 866 (KafkaRDD[1181] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 866.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 867 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 867 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing 
parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 867 (KafkaRDD[1158] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_867 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_863_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 866.0 (TID 866, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_867_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_867_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 867 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 867 (KafkaRDD[1158] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 867.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 868 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 868 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 868 (KafkaRDD[1163] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_868 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 867.0 (TID 867, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_868_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_868_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 868 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 868 (KafkaRDD[1163] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 868.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Got job 869 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 869 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting ResultStage 869 (KafkaRDD[1186] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_869 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting 
task 0.0 in stage 868.0 (TID 868, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:05:00 INFO storage.MemoryStore: Block broadcast_869_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_869_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:05:00 INFO spark.SparkContext: Created broadcast 869 from broadcast at DAGScheduler.scala:1006 18/04/17 17:05:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 869 (KafkaRDD[1186] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:05:00 INFO cluster.YarnClusterScheduler: Adding task set 869.0 with 1 tasks 18/04/17 17:05:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 869.0 (TID 869, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_867_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_865_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_866_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_868_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_864_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:00 INFO storage.BlockManagerInfo: Added broadcast_869_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:05:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 857.0 (TID 857) in 2218 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:05:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 857.0, whose tasks have all completed, from pool 18/04/17 17:05:02 INFO scheduler.DAGScheduler: ResultStage 857 (foreachPartition at PredictorEngineApp.java:153) finished in 2.219 s 18/04/17 17:05:02 INFO scheduler.DAGScheduler: Job 857 finished: foreachPartition at PredictorEngineApp.java:153, took 2.267392 s 18/04/17 17:05:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3f49a45e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3f49a45e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43251, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ed8, negotiated timeout = 60000 18/04/17 17:05:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ed8 18/04/17 17:05:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ed8 closed 18/04/17 17:05:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.25 from job set of time 1523973900000 ms 18/04/17 17:05:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 845.0 (TID 845) in 2994 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:05:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 845.0, whose tasks have all completed, from pool 18/04/17 17:05:03 INFO scheduler.DAGScheduler: ResultStage 845 (foreachPartition at PredictorEngineApp.java:153) finished in 2.994 s 18/04/17 17:05:03 INFO scheduler.DAGScheduler: Job 845 finished: foreachPartition at PredictorEngineApp.java:153, took 3.005562 s 18/04/17 17:05:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2ade1520 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2ade15200x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38662, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95dd, negotiated timeout = 60000 18/04/17 17:05:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95dd 18/04/17 17:05:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95dd closed 18/04/17 17:05:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.7 from job set of time 1523973900000 ms 18/04/17 17:05:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 850.0 (TID 850) in 3183 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:05:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 850.0, whose tasks have all completed, from pool 18/04/17 17:05:03 INFO scheduler.DAGScheduler: ResultStage 850 (foreachPartition at PredictorEngineApp.java:153) finished in 3.183 s 18/04/17 17:05:03 INFO scheduler.DAGScheduler: Job 850 finished: foreachPartition at PredictorEngineApp.java:153, took 3.210341 s 18/04/17 17:05:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5e59e543 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5e59e5430x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38666, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95de, negotiated timeout = 60000 18/04/17 17:05:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95de 18/04/17 17:05:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95de closed 18/04/17 17:05:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:03 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.8 from job set of time 1523973900000 ms 18/04/17 17:05:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 864.0 (TID 864) in 5721 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:05:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 864.0, whose tasks have all completed, from pool 18/04/17 17:05:05 INFO scheduler.DAGScheduler: ResultStage 864 (foreachPartition at PredictorEngineApp.java:153) finished in 5.721 s 18/04/17 17:05:05 INFO scheduler.DAGScheduler: Job 865 finished: foreachPartition at PredictorEngineApp.java:153, took 5.795429 s 18/04/17 17:05:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x489b695c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x489b695c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38673, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95e2, negotiated timeout = 60000 18/04/17 17:05:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95e2 18/04/17 17:05:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95e2 closed 18/04/17 17:05:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.31 from job set of time 1523973900000 ms 18/04/17 17:05:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 854.0 (TID 854) in 6303 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:05:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 854.0, whose tasks have all completed, from pool 18/04/17 17:05:06 INFO scheduler.DAGScheduler: ResultStage 854 (foreachPartition at PredictorEngineApp.java:153) finished in 6.303 s 18/04/17 17:05:06 INFO scheduler.DAGScheduler: Job 854 finished: foreachPartition at PredictorEngineApp.java:153, took 6.342448 s 18/04/17 17:05:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5252bf3a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5252bf3a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60529, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9599, negotiated timeout = 60000 18/04/17 17:05:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9599 18/04/17 17:05:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9599 closed 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.9 from job set of time 1523973900000 ms 18/04/17 17:05:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 861.0 (TID 861) in 6371 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:05:06 INFO scheduler.DAGScheduler: ResultStage 861 (foreachPartition at PredictorEngineApp.java:153) finished in 6.372 s 18/04/17 17:05:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 861.0, whose tasks have all completed, from pool 18/04/17 17:05:06 INFO scheduler.DAGScheduler: Job 861 finished: foreachPartition at PredictorEngineApp.java:153, took 6.432807 s 18/04/17 17:05:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x10450931 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x104509310x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60532, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a959a, negotiated timeout = 60000 18/04/17 17:05:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a959a 18/04/17 17:05:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a959a closed 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.12 from job set of time 1523973900000 ms 18/04/17 17:05:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 852.0 (TID 852) in 6575 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:05:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 852.0, whose tasks have all completed, from pool 18/04/17 17:05:06 INFO scheduler.DAGScheduler: ResultStage 852 (foreachPartition at PredictorEngineApp.java:153) finished in 6.575 s 18/04/17 17:05:06 INFO scheduler.DAGScheduler: Job 852 finished: foreachPartition at PredictorEngineApp.java:153, took 6.607579 s 18/04/17 17:05:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3b435593 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3b4355930x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38684, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95e3, negotiated timeout = 60000 18/04/17 17:05:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95e3 18/04/17 17:05:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95e3 closed 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.23 from job set of time 1523973900000 ms 18/04/17 17:05:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 867.0 (TID 867) in 6742 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:05:06 INFO scheduler.DAGScheduler: ResultStage 867 (foreachPartition at PredictorEngineApp.java:153) finished in 6.742 s 18/04/17 17:05:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 867.0, whose tasks have all completed, from pool 18/04/17 17:05:06 INFO scheduler.DAGScheduler: Job 867 finished: foreachPartition at PredictorEngineApp.java:153, took 6.821580 s 18/04/17 17:05:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x52fe4135 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x52fe41350x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43282, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ede, negotiated timeout = 60000 18/04/17 17:05:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ede 18/04/17 17:05:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ede closed 18/04/17 17:05:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.6 from job set of time 1523973900000 ms 18/04/17 17:05:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 846.0 (TID 846) in 7021 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:05:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 846.0, whose tasks have all completed, from pool 18/04/17 17:05:07 INFO scheduler.DAGScheduler: ResultStage 846 (foreachPartition at PredictorEngineApp.java:153) finished in 7.022 s 18/04/17 17:05:07 INFO scheduler.DAGScheduler: Job 846 finished: foreachPartition at PredictorEngineApp.java:153, took 7.036128 s 18/04/17 17:05:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3ad7d9a6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3ad7d9a60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38690, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95e5, negotiated timeout = 60000 18/04/17 17:05:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95e5 18/04/17 17:05:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95e5 closed 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.20 from job set of time 1523973900000 ms 18/04/17 17:05:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 856.0 (TID 856) in 7123 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:05:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 856.0, whose tasks have all completed, from pool 18/04/17 17:05:07 INFO scheduler.DAGScheduler: ResultStage 856 (foreachPartition at PredictorEngineApp.java:153) finished in 7.124 s 18/04/17 17:05:07 INFO scheduler.DAGScheduler: Job 856 finished: foreachPartition at PredictorEngineApp.java:153, took 7.169425 s 18/04/17 17:05:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xc81c37b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xc81c37b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43289, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28edf, negotiated timeout = 60000 18/04/17 17:05:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28edf 18/04/17 17:05:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28edf closed 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.32 from job set of time 1523973900000 ms 18/04/17 17:05:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 855.0 (TID 855) in 7399 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:05:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 855.0, whose tasks have all completed, from pool 18/04/17 17:05:07 INFO scheduler.DAGScheduler: ResultStage 855 (foreachPartition at PredictorEngineApp.java:153) finished in 7.400 s 18/04/17 17:05:07 INFO scheduler.DAGScheduler: Job 855 finished: foreachPartition at PredictorEngineApp.java:153, took 7.442296 s 18/04/17 17:05:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4cc64267 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4cc642670x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60548, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a959d, negotiated timeout = 60000 18/04/17 17:05:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a959d 18/04/17 17:05:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a959d closed 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.18 from job set of time 1523973900000 ms 18/04/17 17:05:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 869.0 (TID 869) in 7450 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:05:07 INFO scheduler.DAGScheduler: ResultStage 869 (foreachPartition at PredictorEngineApp.java:153) finished in 7.450 s 18/04/17 17:05:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 869.0, whose tasks have all completed, from pool 18/04/17 17:05:07 INFO scheduler.DAGScheduler: Job 869 finished: foreachPartition at PredictorEngineApp.java:153, took 7.532919 s 18/04/17 17:05:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b2dd5ca connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7b2dd5ca0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43295, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ee0, negotiated timeout = 60000 18/04/17 17:05:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ee0 18/04/17 17:05:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ee0 closed 18/04/17 17:05:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.34 from job set of time 1523973900000 ms 18/04/17 17:05:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 868.0 (TID 868) in 9134 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:05:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 868.0, whose tasks have all completed, from pool 18/04/17 17:05:09 INFO scheduler.DAGScheduler: ResultStage 868 (foreachPartition at PredictorEngineApp.java:153) finished in 9.134 s 18/04/17 17:05:09 INFO scheduler.DAGScheduler: Job 868 finished: foreachPartition at PredictorEngineApp.java:153, took 9.215759 s 18/04/17 17:05:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x48206f13 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x48206f130x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43300, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ee2, negotiated timeout = 60000 18/04/17 17:05:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ee2 18/04/17 17:05:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ee2 closed 18/04/17 17:05:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:09 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.11 from job set of time 1523973900000 ms 18/04/17 17:05:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 862.0 (TID 862) in 9931 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:05:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 862.0, whose tasks have all completed, from pool 18/04/17 17:05:10 INFO scheduler.DAGScheduler: ResultStage 862 (foreachPartition at PredictorEngineApp.java:153) finished in 9.931 s 18/04/17 17:05:10 INFO scheduler.DAGScheduler: Job 862 finished: foreachPartition at PredictorEngineApp.java:153, took 9.994840 s 18/04/17 17:05:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6db6e14b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6db6e14b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43303, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ee5, negotiated timeout = 60000 18/04/17 17:05:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ee5 18/04/17 17:05:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ee5 closed 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.28 from job set of time 1523973900000 ms 18/04/17 17:05:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 860.0 (TID 860) in 10676 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:05:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 860.0, whose tasks have all completed, from pool 18/04/17 17:05:10 INFO scheduler.DAGScheduler: ResultStage 860 (foreachPartition at PredictorEngineApp.java:153) finished in 10.676 s 18/04/17 17:05:10 INFO scheduler.DAGScheduler: Job 860 finished: foreachPartition at PredictorEngineApp.java:153, took 10.734023 s 18/04/17 17:05:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xd29c409 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xd29c4090x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60563, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95a0, negotiated timeout = 60000 18/04/17 17:05:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95a0 18/04/17 17:05:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95a0 closed 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.2 from job set of time 1523973900000 ms 18/04/17 17:05:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 859.0 (TID 859) in 10818 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:05:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 859.0, whose tasks have all completed, from pool 18/04/17 17:05:10 INFO scheduler.DAGScheduler: ResultStage 859 (foreachPartition at PredictorEngineApp.java:153) finished in 10.818 s 18/04/17 17:05:10 INFO scheduler.DAGScheduler: Job 859 finished: foreachPartition at PredictorEngineApp.java:153, took 10.872994 s 18/04/17 17:05:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3564c0db connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3564c0db0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38715, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95ea, negotiated timeout = 60000 18/04/17 17:05:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95ea 18/04/17 17:05:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95ea closed 18/04/17 17:05:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.33 from job set of time 1523973900000 ms 18/04/17 17:05:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 866.0 (TID 866) in 11550 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:05:11 INFO scheduler.DAGScheduler: ResultStage 866 (foreachPartition at PredictorEngineApp.java:153) finished in 11.550 s 18/04/17 17:05:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 866.0, whose tasks have all completed, from pool 18/04/17 17:05:11 INFO scheduler.DAGScheduler: Job 863 finished: foreachPartition at PredictorEngineApp.java:153, took 11.628035 s 18/04/17 17:05:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1657cac3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1657cac30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60571, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95a1, negotiated timeout = 60000 18/04/17 17:05:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95a1 18/04/17 17:05:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95a1 closed 18/04/17 17:05:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:11 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.29 from job set of time 1523973900000 ms 18/04/17 17:05:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 863.0 (TID 863) in 11890 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:05:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 863.0, whose tasks have all completed, from pool 18/04/17 17:05:12 INFO scheduler.DAGScheduler: ResultStage 863 (foreachPartition at PredictorEngineApp.java:153) finished in 11.891 s 18/04/17 17:05:12 INFO scheduler.DAGScheduler: Job 864 finished: foreachPartition at PredictorEngineApp.java:153, took 11.956719 s 18/04/17 17:05:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x511fefe7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x511fefe70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60574, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95a2, negotiated timeout = 60000 18/04/17 17:05:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95a2 18/04/17 17:05:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95a2 closed 18/04/17 17:05:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 853.0 (TID 853) in 11943 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:05:12 INFO scheduler.DAGScheduler: ResultStage 853 (foreachPartition at PredictorEngineApp.java:153) finished in 11.944 s 18/04/17 17:05:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 853.0, whose tasks have all completed, from pool 18/04/17 17:05:12 INFO scheduler.DAGScheduler: Job 853 finished: foreachPartition at PredictorEngineApp.java:153, took 11.979634 s 18/04/17 17:05:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xe5a2515 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xe5a25150x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38726, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.27 from job set of time 1523973900000 ms 18/04/17 17:05:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95ee, negotiated timeout = 60000 18/04/17 17:05:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95ee 18/04/17 17:05:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95ee closed 18/04/17 17:05:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:12 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.15 from job set of time 1523973900000 ms 18/04/17 17:05:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 848.0 (TID 848) in 12959 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:05:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 848.0, whose tasks have all completed, from pool 18/04/17 17:05:13 INFO scheduler.DAGScheduler: ResultStage 848 (foreachPartition at PredictorEngineApp.java:153) finished in 12.959 s 18/04/17 17:05:13 INFO scheduler.DAGScheduler: Job 848 finished: foreachPartition at PredictorEngineApp.java:153, took 12.979363 s 18/04/17 17:05:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x75080ccb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x75080ccb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38730, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95f1, negotiated timeout = 60000 18/04/17 17:05:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95f1 18/04/17 17:05:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95f1 closed 18/04/17 17:05:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.19 from job set of time 1523973900000 ms 18/04/17 17:05:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 849.0 (TID 849) in 13293 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:05:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 849.0, whose tasks have all completed, from pool 18/04/17 17:05:13 INFO scheduler.DAGScheduler: ResultStage 849 (foreachPartition at PredictorEngineApp.java:153) finished in 13.293 s 18/04/17 17:05:13 INFO scheduler.DAGScheduler: Job 849 finished: foreachPartition at PredictorEngineApp.java:153, took 13.315963 s 18/04/17 17:05:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x25653c59 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x25653c590x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43329, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ee7, negotiated timeout = 60000 18/04/17 17:05:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ee7 18/04/17 17:05:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ee7 closed 18/04/17 17:05:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.24 from job set of time 1523973900000 ms 18/04/17 17:05:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 844.0 (TID 844) in 15602 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:05:15 INFO scheduler.DAGScheduler: ResultStage 844 (foreachPartition at PredictorEngineApp.java:153) finished in 15.603 s 18/04/17 17:05:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 844.0, whose tasks have all completed, from pool 18/04/17 17:05:15 INFO scheduler.DAGScheduler: Job 844 finished: foreachPartition at PredictorEngineApp.java:153, took 15.610928 s 18/04/17 17:05:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x29d0a131 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x29d0a1310x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38740, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95f3, negotiated timeout = 60000 18/04/17 17:05:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95f3 18/04/17 17:05:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95f3 closed 18/04/17 17:05:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.22 from job set of time 1523973900000 ms 18/04/17 17:05:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 858.0 (TID 858) in 16057 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:05:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 858.0, whose tasks have all completed, from pool 18/04/17 17:05:16 INFO scheduler.DAGScheduler: ResultStage 858 (foreachPartition at PredictorEngineApp.java:153) finished in 16.057 s 18/04/17 17:05:16 INFO scheduler.DAGScheduler: Job 858 finished: foreachPartition at PredictorEngineApp.java:153, took 16.109096 s 18/04/17 17:05:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x11220d45 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x11220d450x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38747, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c95f4, negotiated timeout = 60000 18/04/17 17:05:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c95f4 18/04/17 17:05:16 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c95f4 closed 18/04/17 17:05:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:16 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.26 from job set of time 1523973900000 ms 18/04/17 17:05:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 847.0 (TID 847) in 17693 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:05:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 847.0, whose tasks have all completed, from pool 18/04/17 17:05:17 INFO scheduler.DAGScheduler: ResultStage 847 (foreachPartition at PredictorEngineApp.java:153) finished in 17.693 s 18/04/17 17:05:17 INFO scheduler.DAGScheduler: Job 847 finished: foreachPartition at PredictorEngineApp.java:153, took 17.711432 s 18/04/17 17:05:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x39ab92e2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x39ab92e20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60602, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95a8, negotiated timeout = 60000 18/04/17 17:05:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95a8 18/04/17 17:05:17 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95a8 closed 18/04/17 17:05:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:17 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.1 from job set of time 1523973900000 ms 18/04/17 17:05:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 865.0 (TID 865) in 18866 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:05:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 865.0, whose tasks have all completed, from pool 18/04/17 17:05:19 INFO scheduler.DAGScheduler: ResultStage 865 (foreachPartition at PredictorEngineApp.java:153) finished in 18.866 s 18/04/17 17:05:19 INFO scheduler.DAGScheduler: Job 866 finished: foreachPartition at PredictorEngineApp.java:153, took 18.941745 s 18/04/17 17:05:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x45b2e80c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x45b2e80c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43351, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28eea, negotiated timeout = 60000 18/04/17 17:05:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28eea 18/04/17 17:05:19 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28eea closed 18/04/17 17:05:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:19 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.5 from job set of time 1523973900000 ms 18/04/17 17:05:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 851.0 (TID 851) in 20464 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:05:20 INFO scheduler.DAGScheduler: ResultStage 851 (foreachPartition at PredictorEngineApp.java:153) finished in 20.464 s 18/04/17 17:05:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 851.0, whose tasks have all completed, from pool 18/04/17 17:05:20 INFO scheduler.DAGScheduler: Job 851 finished: foreachPartition at PredictorEngineApp.java:153, took 20.494470 s 18/04/17 17:05:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x24918260 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:05:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x249182600x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:05:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:05:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43356, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:05:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28eed, negotiated timeout = 60000 18/04/17 17:05:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28eed 18/04/17 17:05:20 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28eed closed 18/04/17 17:05:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:05:20 INFO scheduler.JobScheduler: Finished job streaming job 1523973900000 ms.10 from job set of time 1523973900000 ms 18/04/17 17:05:20 INFO scheduler.JobScheduler: Total delay: 20.601 s for time 1523973900000 ms (execution: 20.545 s) 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1116 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1116 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1116 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1116 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1117 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1117 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1117 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1117 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1118 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1118 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1118 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1118 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1119 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1119 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1119 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1119 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1120 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1120 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1120 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1120 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1121 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1121 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1121 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1121 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1122 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1122 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1122 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1122 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1123 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1123 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1123 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1123 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1124 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1124 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1124 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1124 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1125 
from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1125 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1125 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1125 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1126 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1126 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1126 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1126 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1127 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1127 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1127 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1127 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1128 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1128 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1128 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1128 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1129 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1129 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1129 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1129 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1130 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1130 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1130 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1130 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1131 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1131 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1131 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1131 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1132 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1132 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1132 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1132 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1133 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1133 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1133 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1133 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1134 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1134 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1134 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1134 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1135 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1135 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1135 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1135 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1136 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1136 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1136 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1136 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1137 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1137 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1137 from 
persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1137 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1138 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1138 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1138 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1138 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1139 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1139 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1139 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1139 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1140 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1140 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1140 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1140 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1141 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1141 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1141 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1141 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1142 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1142 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1142 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1142 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1143 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1143 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1143 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1143 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1144 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1144 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1144 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1144 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1145 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1145 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1145 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1145 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1146 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1146 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1146 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1146 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1147 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1147 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1147 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1147 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1148 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1148 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1148 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1148 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1149 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1149 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1149 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1149 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1150 from 
persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1150 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1150 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1150 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1151 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1151 18/04/17 17:05:20 INFO kafka.KafkaRDD: Removing RDD 1151 from persistence list 18/04/17 17:05:20 INFO storage.BlockManager: Removing RDD 1151 18/04/17 17:05:20 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:05:20 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973780000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Added jobs for time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.0 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.1 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.2 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.3 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.4 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.0 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.3 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.5 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.7 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.8 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.6 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.4 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.10 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.9 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.12 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.11 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.13 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.14 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.14 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.13 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.16 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.17 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.16 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.15 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.17 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.19 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.20 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.18 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.21 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.22 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.21 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.23 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.24 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.25 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.26 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.27 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.28 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.29 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.30 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.31 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.32 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.30 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.33 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.35 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Starting job streaming job 1523973960000 ms.34 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.35 from job set of time 1523973960000 ms 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 870 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 870 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 870 (KafkaRDD[1220] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_870 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_870_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_870_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 870 from broadcast at 
DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 870 (KafkaRDD[1220] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 870.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 871 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 871 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 871 (KafkaRDD[1197] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 870.0 (TID 870, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_871 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_871_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_871_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 871 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 871 (KafkaRDD[1197] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 871.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 872 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 872 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 872 (KafkaRDD[1198] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 871.0 (TID 871, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_872 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_872_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_872_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 872 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 872 (KafkaRDD[1198] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 872.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 873 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 873 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 873 (KafkaRDD[1221] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 872.0 (TID 872, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_873 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_865_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_873_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_873_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 873 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 873 (KafkaRDD[1221] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 873.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 874 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 874 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 874 (KafkaRDD[1222] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 873.0 (TID 873, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_874 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_870_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_865_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_871_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 846 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_844_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_874_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_874_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 874 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 874 (KafkaRDD[1222] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO 
cluster.YarnClusterScheduler: Adding task set 874.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 875 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 875 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 875 (KafkaRDD[1215] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_844_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 874.0 (TID 874, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_875 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 845 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_845_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_845_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 848 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_846_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_872_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_875_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_875_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 875 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 875 (KafkaRDD[1215] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 875.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 876 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_846_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 876 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 876 (KafkaRDD[1207] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 875.0 (TID 875, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_876 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 847 18/04/17 
17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_848_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_848_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 849 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_873_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_847_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_847_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_876_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_876_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 876 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 876 (KafkaRDD[1207] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 876.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 878 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 877 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 877 (KafkaRDD[1216] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 851 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 876.0 (TID 876, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_877 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_874_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_849_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_849_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 850 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_851_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_851_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 852 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_877_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_850_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO 
storage.BlockManagerInfo: Added broadcast_877_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 877 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 877 (KafkaRDD[1216] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 877.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 877 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 878 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 878 (KafkaRDD[1211] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 877.0 (TID 877, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_850_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_878 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 854 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_852_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_878_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_878_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 878 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 878 (KafkaRDD[1211] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 878.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 879 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 879 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 879 (KafkaRDD[1206] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_879 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 878.0 (TID 878, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_879_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_879_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO 
storage.BlockManagerInfo: Added broadcast_876_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 879 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 879 (KafkaRDD[1206] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 879.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 880 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 880 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 880 (KafkaRDD[1217] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 879.0 (TID 879, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_880 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_880_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_880_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 880 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 880 (KafkaRDD[1217] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 880.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 881 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 881 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 881 (KafkaRDD[1212] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_881 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 880.0 (TID 880, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_875_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_852_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_881_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_881_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 853 18/04/17 17:06:00 INFO 
spark.SparkContext: Created broadcast 881 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 881 (KafkaRDD[1212] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 881.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 882 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 882 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 882 (KafkaRDD[1190] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_854_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_882 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 881.0 (TID 881, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_879_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_854_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_882_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_882_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 882 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 882 (KafkaRDD[1190] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 882.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 883 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 883 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 883 (KafkaRDD[1200] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_883 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 882.0 (TID 882, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_883_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_883_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 883 from broadcast at DAGScheduler.scala:1006 
18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 883 (KafkaRDD[1200] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 883.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 884 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 884 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 884 (KafkaRDD[1208] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_884 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 883.0 (TID 883, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_884_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_884_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 884 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 884 (KafkaRDD[1208] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 884.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 885 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 885 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 885 (KafkaRDD[1199] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_880_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_885 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 884.0 (TID 884, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_877_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_885_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_885_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 885 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 885 (KafkaRDD[1199] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task 
set 885.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 886 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 886 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 886 (KafkaRDD[1193] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_886 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 885.0 (TID 885, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_886_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_886_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 886 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 886 (KafkaRDD[1193] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 886.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 887 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 887 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 887 (KafkaRDD[1210] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_887 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 886.0 (TID 886, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_883_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_887_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 855 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_887_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_878_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_853_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 887 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 887 (KafkaRDD[1210] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 887.0 with 1 
tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 888 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 888 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 888 (KafkaRDD[1219] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_888 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 887.0 (TID 887, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_853_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_888_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_888_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 888 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 888 (KafkaRDD[1219] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 888.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 889 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 889 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 889 (KafkaRDD[1213] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_889 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 888.0 (TID 888, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_886_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_889_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_889_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 889 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 889 (KafkaRDD[1213] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 889.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 890 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 890 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 890 (KafkaRDD[1195] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_884_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_890 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 889.0 (TID 889, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_887_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_890_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_881_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_890_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 890 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 890 (KafkaRDD[1195] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 890.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 891 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 891 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 891 (KafkaRDD[1214] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_891 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 890.0 (TID 890, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_888_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_891_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_891_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 891 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 891 (KafkaRDD[1214] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 891.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 892 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 
17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 892 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 892 (KafkaRDD[1203] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_892 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 891.0 (TID 891, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_889_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_892_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_892_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 892 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 892 (KafkaRDD[1203] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 892.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 893 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 893 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 893 (KafkaRDD[1194] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_893 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 892.0 (TID 892, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_882_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 857 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_893_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_893_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_855_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 893 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 893 (KafkaRDD[1194] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 893.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 894 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 894 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 894 (KafkaRDD[1189] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_894 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 893.0 (TID 893, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_855_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 856 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_894_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_891_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_894_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_857_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO spark.SparkContext: Created broadcast 894 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 894 (KafkaRDD[1189] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 894.0 with 1 tasks 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Got job 895 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 895 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting ResultStage 895 (KafkaRDD[1196] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 894.0 (TID 894, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_895 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_890_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_857_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 858 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_856_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.MemoryStore: Block broadcast_895_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_895_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO 
spark.SparkContext: Created broadcast 895 from broadcast at DAGScheduler.scala:1006 18/04/17 17:06:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 895 (KafkaRDD[1196] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:06:00 INFO cluster.YarnClusterScheduler: Adding task set 895.0 with 1 tasks 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_885_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_893_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_856_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 860 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_858_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_892_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 895.0 (TID 895, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_858_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 859 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_860_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_860_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 861 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_859_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_859_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_894_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 863 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_861_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_861_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Added broadcast_895_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 862 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_863_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_863_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 864 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_862_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_862_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 
17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 866 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_864_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_864_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 865 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_866_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_866_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 867 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 869 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_867_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_867_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 868 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_869_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_869_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:00 INFO spark.ContextCleaner: Cleaned accumulator 870 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_868_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:06:00 INFO storage.BlockManagerInfo: Removed broadcast_868_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:06:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 889.0 (TID 889) in 1936 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:06:02 INFO scheduler.DAGScheduler: ResultStage 889 (foreachPartition at PredictorEngineApp.java:153) finished in 1.936 s 18/04/17 17:06:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 889.0, whose tasks have all completed, from pool 18/04/17 17:06:02 INFO scheduler.DAGScheduler: Job 889 finished: foreachPartition at PredictorEngineApp.java:153, took 2.015908 s 18/04/17 17:06:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6959bd29 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6959bd290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38909, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c960b, negotiated timeout = 60000 18/04/17 17:06:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c960b 18/04/17 17:06:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c960b closed 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.25 from job set of time 1523973960000 ms 18/04/17 17:06:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 890.0 (TID 890) in 2077 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:06:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 890.0, whose tasks have all completed, from pool 18/04/17 17:06:02 INFO scheduler.DAGScheduler: ResultStage 890 (foreachPartition at PredictorEngineApp.java:153) finished in 2.077 s 18/04/17 17:06:02 INFO scheduler.DAGScheduler: Job 890 finished: foreachPartition at PredictorEngineApp.java:153, took 2.158389 s 18/04/17 17:06:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6e909d75 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6e909d750x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
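[Editor's note] The stages above (for example ResultStage 895, built from KafkaRDD[1196]) trace back to createDirectStream at PredictorEngineApp.java:125. The application source is not part of this log, so the following is only a minimal Spark 1.6 Java sketch of such a direct Kafka stream; the broker list, topic name, and batch interval are placeholders and assumptions, not values taken from this log.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class DirectStreamSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        // A 60 s batch interval is an assumption for this sketch.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers

        Set<String> topics = new HashSet<>();
        topics.add("events"); // placeholder topic

        // Each micro-batch of this stream becomes a KafkaRDD, like KafkaRDD[1196] in the log above.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        stream.print();
        jssc.start();
        jssc.awaitTermination();
    }
}
```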
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60763, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95b5, negotiated timeout = 60000 18/04/17 17:06:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95b5 18/04/17 17:06:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95b5 closed 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.7 from job set of time 1523973960000 ms 18/04/17 17:06:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 895.0 (TID 895) in 2287 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:06:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 895.0, whose tasks have all completed, from pool 18/04/17 17:06:02 INFO scheduler.DAGScheduler: ResultStage 895 (foreachPartition at PredictorEngineApp.java:153) finished in 2.288 s 18/04/17 17:06:02 INFO scheduler.DAGScheduler: Job 895 finished: foreachPartition at PredictorEngineApp.java:153, took 2.386287 s 18/04/17 17:06:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x229dd150 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x229dd1500x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
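[Editor's note] Task 0.0 in stage 895.0 above was scheduled RACK_LOCAL: for a direct Kafka stream the KafkaRDD's preferred location is typically the partition's Kafka leader, and when no executor runs on that host the scheduler falls back to rack locality after the locality wait expires. If that wait ever needs tuning, the relevant settings are the generic Spark ones sketched below; the values shown are Spark defaults, not settings read from this log.

```java
import org.apache.spark.SparkConf;

public class LocalityWaitSketch {
    // Hypothetical tuning; "3s" is the Spark default, not a value taken from this application.
    static SparkConf conf() {
        return new SparkConf()
                .setAppName("locality-wait-sketch")
                .set("spark.locality.wait", "3s")       // how long to wait for a better locality level
                .set("spark.locality.wait.rack", "3s"); // extra wait before dropping below RACK_LOCAL
    }
}
```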
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43510, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28efa, negotiated timeout = 60000 18/04/17 17:06:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28efa 18/04/17 17:06:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28efa closed 18/04/17 17:06:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:02 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.8 from job set of time 1523973960000 ms 18/04/17 17:06:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 888.0 (TID 888) in 3947 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:06:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 888.0, whose tasks have all completed, from pool 18/04/17 17:06:04 INFO scheduler.DAGScheduler: ResultStage 888 (foreachPartition at PredictorEngineApp.java:153) finished in 3.947 s 18/04/17 17:06:04 INFO scheduler.DAGScheduler: Job 888 finished: foreachPartition at PredictorEngineApp.java:153, took 4.024023 s 18/04/17 17:06:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7eabf1b1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7eabf1b10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60772, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95b6, negotiated timeout = 60000 18/04/17 17:06:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95b6 18/04/17 17:06:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95b6 closed 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.31 from job set of time 1523973960000 ms 18/04/17 17:06:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 876.0 (TID 876) in 4213 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:06:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 876.0, whose tasks have all completed, from pool 18/04/17 17:06:04 INFO scheduler.DAGScheduler: ResultStage 876 (foreachPartition at PredictorEngineApp.java:153) finished in 4.213 s 18/04/17 17:06:04 INFO scheduler.DAGScheduler: Job 876 finished: foreachPartition at PredictorEngineApp.java:153, took 4.253669 s 18/04/17 17:06:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4377d3d6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4377d3d60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:04 INFO scheduler.DAGScheduler: ResultStage 883 (foreachPartition at PredictorEngineApp.java:153) finished in 4.198 s 18/04/17 17:06:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 883.0 (TID 883) in 4196 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:06:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 883.0, whose tasks have all completed, from pool 18/04/17 17:06:04 INFO scheduler.DAGScheduler: Job 883 finished: foreachPartition at PredictorEngineApp.java:153, took 4.316574 s 18/04/17 17:06:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6c33c1ae connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6c33c1ae0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60775, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60776, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95b7, negotiated timeout = 60000 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95b8, negotiated timeout = 60000 18/04/17 17:06:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95b8 18/04/17 17:06:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95b7 18/04/17 17:06:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95b8 closed 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95b7 closed 18/04/17 17:06:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.12 from job set of time 1523973960000 ms 18/04/17 17:06:04 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.19 from job set of time 1523973960000 ms 18/04/17 17:06:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 877.0 (TID 877) in 4992 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:06:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 877.0, whose tasks have all completed, from pool 18/04/17 17:06:05 INFO scheduler.DAGScheduler: ResultStage 877 (foreachPartition at PredictorEngineApp.java:153) finished in 4.992 s 18/04/17 17:06:05 INFO scheduler.DAGScheduler: Job 878 finished: foreachPartition at PredictorEngineApp.java:153, took 5.037451 s 18/04/17 17:06:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x27b4d063 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x27b4d0630x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38932, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c960f, negotiated timeout = 60000 18/04/17 17:06:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c960f 18/04/17 17:06:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c960f closed 18/04/17 17:06:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.28 from job set of time 1523973960000 ms 18/04/17 17:06:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 884.0 (TID 884) in 5284 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:06:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 884.0, whose tasks have all completed, from pool 18/04/17 17:06:05 INFO scheduler.DAGScheduler: ResultStage 884 (foreachPartition at PredictorEngineApp.java:153) finished in 5.285 s 18/04/17 17:06:05 INFO scheduler.DAGScheduler: Job 884 finished: foreachPartition at PredictorEngineApp.java:153, took 5.349758 s 18/04/17 17:06:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x65cc12ca connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x65cc12ca0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
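[Editor's note] Every job above ends at foreachPartition at PredictorEngineApp.java:153, and in this driver log each finished job is followed by an HBase client connection opening and closing a ZooKeeper session (the hconnection-* / RecoverableZooKeeper entries). The application code itself is not in the log, so the sketch below is only a generic illustration of writing a partition to HBase through a short-lived connection with the HBase 1.x client; the table name, column family, and record layout are assumptions.

```java
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.api.java.JavaRDD;

public class HBaseWriteSketch {
    // Hypothetical per-partition writer; "predictions" and "cf" are made-up names.
    static void writePartitions(JavaRDD<String> rdd) {
        rdd.foreachPartition((Iterator<String> rows) -> {
            Configuration hbaseConf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
            try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
                 Table table = connection.getTable(TableName.valueOf("predictions"))) {
                while (rows.hasNext()) {
                    String row = rows.next();
                    Put put = new Put(Bytes.toBytes(row));
                    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"), Bytes.toBytes(row));
                    table.put(put);
                }
            } // closing the connection ends the ZooKeeper session, as seen in the log entries above
        });
    }
}
```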
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38935, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9610, negotiated timeout = 60000 18/04/17 17:06:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9610 18/04/17 17:06:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9610 closed 18/04/17 17:06:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:05 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.20 from job set of time 1523973960000 ms 18/04/17 17:06:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 874.0 (TID 874) in 6020 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:06:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 874.0, whose tasks have all completed, from pool 18/04/17 17:06:06 INFO scheduler.DAGScheduler: ResultStage 874 (foreachPartition at PredictorEngineApp.java:153) finished in 6.021 s 18/04/17 17:06:06 INFO scheduler.DAGScheduler: Job 874 finished: foreachPartition at PredictorEngineApp.java:153, took 6.051886 s 18/04/17 17:06:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7492f274 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7492f2740x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60790, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95bb, negotiated timeout = 60000 18/04/17 17:06:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95bb 18/04/17 17:06:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95bb closed 18/04/17 17:06:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:06 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.34 from job set of time 1523973960000 ms 18/04/17 17:06:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 871.0 (TID 871) in 7430 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:06:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 871.0, whose tasks have all completed, from pool 18/04/17 17:06:07 INFO scheduler.DAGScheduler: ResultStage 871 (foreachPartition at PredictorEngineApp.java:153) finished in 7.431 s 18/04/17 17:06:07 INFO scheduler.DAGScheduler: Job 871 finished: foreachPartition at PredictorEngineApp.java:153, took 7.440432 s 18/04/17 17:06:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ab9e8c0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4ab9e8c00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60796, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95bc, negotiated timeout = 60000 18/04/17 17:06:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95bc 18/04/17 17:06:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95bc closed 18/04/17 17:06:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.9 from job set of time 1523973960000 ms 18/04/17 17:06:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 893.0 (TID 893) in 7504 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:06:07 INFO scheduler.DAGScheduler: ResultStage 893 (foreachPartition at PredictorEngineApp.java:153) finished in 7.504 s 18/04/17 17:06:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 893.0, whose tasks have all completed, from pool 18/04/17 17:06:07 INFO scheduler.DAGScheduler: Job 893 finished: foreachPartition at PredictorEngineApp.java:153, took 7.592605 s 18/04/17 17:06:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2c8dd303 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2c8dd3030x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60799, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95bd, negotiated timeout = 60000 18/04/17 17:06:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95bd 18/04/17 17:06:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95bd closed 18/04/17 17:06:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:07 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.6 from job set of time 1523973960000 ms 18/04/17 17:06:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 879.0 (TID 879) in 10835 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:06:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 879.0, whose tasks have all completed, from pool 18/04/17 17:06:10 INFO scheduler.DAGScheduler: ResultStage 879 (foreachPartition at PredictorEngineApp.java:153) finished in 10.836 s 18/04/17 17:06:10 INFO scheduler.DAGScheduler: Job 879 finished: foreachPartition at PredictorEngineApp.java:153, took 10.887312 s 18/04/17 17:06:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7bfb6691 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7bfb66910x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60809, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95c2, negotiated timeout = 60000 18/04/17 17:06:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95c2 18/04/17 17:06:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95c2 closed 18/04/17 17:06:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:10 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.18 from job set of time 1523973960000 ms 18/04/17 17:06:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 892.0 (TID 892) in 13683 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:06:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 892.0, whose tasks have all completed, from pool 18/04/17 17:06:13 INFO scheduler.DAGScheduler: ResultStage 892 (foreachPartition at PredictorEngineApp.java:153) finished in 13.683 s 18/04/17 17:06:13 INFO scheduler.DAGScheduler: Job 892 finished: foreachPartition at PredictorEngineApp.java:153, took 13.769579 s 18/04/17 17:06:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5587d7b4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5587d7b40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43559, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f03, negotiated timeout = 60000 18/04/17 17:06:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f03 18/04/17 17:06:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f03 closed 18/04/17 17:06:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:13 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.15 from job set of time 1523973960000 ms 18/04/17 17:06:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 875.0 (TID 875) in 14139 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:06:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 875.0, whose tasks have all completed, from pool 18/04/17 17:06:14 INFO scheduler.DAGScheduler: ResultStage 875 (foreachPartition at PredictorEngineApp.java:153) finished in 14.141 s 18/04/17 17:06:14 INFO scheduler.DAGScheduler: Job 875 finished: foreachPartition at PredictorEngineApp.java:153, took 14.176548 s 18/04/17 17:06:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x40364c9a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x40364c9a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60819, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95c4, negotiated timeout = 60000 18/04/17 17:06:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95c4 18/04/17 17:06:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95c4 closed 18/04/17 17:06:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:14 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.27 from job set of time 1523973960000 ms 18/04/17 17:06:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 873.0 (TID 873) in 14215 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:06:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 873.0, whose tasks have all completed, from pool 18/04/17 17:06:14 INFO scheduler.DAGScheduler: ResultStage 873 (foreachPartition at PredictorEngineApp.java:153) finished in 14.215 s 18/04/17 17:06:14 INFO scheduler.DAGScheduler: Job 873 finished: foreachPartition at PredictorEngineApp.java:153, took 14.240564 s 18/04/17 17:06:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f2b2e92 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f2b2e920x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43566, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f04, negotiated timeout = 60000 18/04/17 17:06:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f04 18/04/17 17:06:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f04 closed 18/04/17 17:06:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:14 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.33 from job set of time 1523973960000 ms 18/04/17 17:06:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 880.0 (TID 880) in 15271 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:06:15 INFO scheduler.DAGScheduler: ResultStage 880 (foreachPartition at PredictorEngineApp.java:153) finished in 15.272 s 18/04/17 17:06:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 880.0, whose tasks have all completed, from pool 18/04/17 17:06:15 INFO scheduler.DAGScheduler: Job 880 finished: foreachPartition at PredictorEngineApp.java:153, took 15.326096 s 18/04/17 17:06:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x508523c9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x508523c90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60828, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95c8, negotiated timeout = 60000 18/04/17 17:06:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95c8 18/04/17 17:06:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95c8 closed 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.29 from job set of time 1523973960000 ms 18/04/17 17:06:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 882.0 (TID 882) in 15308 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:06:15 INFO scheduler.DAGScheduler: ResultStage 882 (foreachPartition at PredictorEngineApp.java:153) finished in 15.308 s 18/04/17 17:06:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 882.0, whose tasks have all completed, from pool 18/04/17 17:06:15 INFO scheduler.DAGScheduler: Job 882 finished: foreachPartition at PredictorEngineApp.java:153, took 15.368243 s 18/04/17 17:06:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x691de71c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x691de71c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38980, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9616, negotiated timeout = 60000 18/04/17 17:06:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9616 18/04/17 17:06:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9616 closed 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.2 from job set of time 1523973960000 ms 18/04/17 17:06:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 870.0 (TID 870) in 15756 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:06:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 870.0, whose tasks have all completed, from pool 18/04/17 17:06:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 881.0 (TID 881) in 15704 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:06:15 INFO scheduler.DAGScheduler: ResultStage 870 (foreachPartition at PredictorEngineApp.java:153) finished in 15.757 s 18/04/17 17:06:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 881.0, whose tasks have all completed, from pool 18/04/17 17:06:15 INFO scheduler.DAGScheduler: Job 870 finished: foreachPartition at PredictorEngineApp.java:153, took 15.763751 s 18/04/17 17:06:15 INFO scheduler.DAGScheduler: ResultStage 881 (foreachPartition at PredictorEngineApp.java:153) finished in 15.704 s 18/04/17 17:06:15 INFO scheduler.DAGScheduler: Job 881 finished: foreachPartition at PredictorEngineApp.java:153, took 15.760994 s 18/04/17 17:06:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7024eac7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2edf09ed connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7024eac70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2edf09ed0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60834, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38984, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95ca, negotiated timeout = 60000 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9617, negotiated timeout = 60000 18/04/17 17:06:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9617 18/04/17 17:06:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95ca 18/04/17 17:06:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9617 closed 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95ca closed 18/04/17 17:06:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.32 from job set of time 1523973960000 ms 18/04/17 17:06:15 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.24 from job set of time 1523973960000 ms 18/04/17 17:06:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 878.0 (TID 878) in 16104 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:06:16 INFO scheduler.DAGScheduler: ResultStage 878 (foreachPartition at PredictorEngineApp.java:153) finished in 16.105 s 18/04/17 17:06:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 878.0, whose tasks have all completed, from pool 18/04/17 17:06:16 INFO scheduler.DAGScheduler: Job 877 finished: foreachPartition at PredictorEngineApp.java:153, took 16.153693 s 18/04/17 17:06:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x22e6b31c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x22e6b31c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43585, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f09, negotiated timeout = 60000 18/04/17 17:06:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f09 18/04/17 17:06:16 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f09 closed 18/04/17 17:06:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:16 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.23 from job set of time 1523973960000 ms 18/04/17 17:06:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 887.0 (TID 887) in 18850 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:06:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 887.0, whose tasks have all completed, from pool 18/04/17 17:06:18 INFO scheduler.DAGScheduler: ResultStage 887 (foreachPartition at PredictorEngineApp.java:153) finished in 18.851 s 18/04/17 17:06:18 INFO scheduler.DAGScheduler: Job 887 finished: foreachPartition at PredictorEngineApp.java:153, took 18.924828 s 18/04/17 17:06:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x51825e18 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x51825e180x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60848, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95cb, negotiated timeout = 60000 18/04/17 17:06:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95cb 18/04/17 17:06:19 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95cb closed 18/04/17 17:06:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:19 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.22 from job set of time 1523973960000 ms 18/04/17 17:06:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 886.0 (TID 886) in 20391 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:06:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 886.0, whose tasks have all completed, from pool 18/04/17 17:06:20 INFO scheduler.DAGScheduler: ResultStage 886 (foreachPartition at PredictorEngineApp.java:153) finished in 20.391 s 18/04/17 17:06:20 INFO scheduler.DAGScheduler: Job 886 finished: foreachPartition at PredictorEngineApp.java:153, took 20.462425 s 18/04/17 17:06:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x415705f8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x415705f80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60855, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95cc, negotiated timeout = 60000 18/04/17 17:06:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95cc 18/04/17 17:06:20 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95cc closed 18/04/17 17:06:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:20 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.5 from job set of time 1523973960000 ms 18/04/17 17:06:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 891.0 (TID 891) in 21086 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:06:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 891.0, whose tasks have all completed, from pool 18/04/17 17:06:21 INFO scheduler.DAGScheduler: ResultStage 891 (foreachPartition at PredictorEngineApp.java:153) finished in 21.087 s 18/04/17 17:06:21 INFO scheduler.DAGScheduler: Job 891 finished: foreachPartition at PredictorEngineApp.java:153, took 21.171207 s 18/04/17 17:06:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x13a244d1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x13a244d10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
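[Editor's note] Task sets 870 through 895, all belonging to the 1523973960000 ms job set, are in flight at the same time in the entries above. By default the streaming JobScheduler runs one output job at a time, so this degree of overlap suggests spark.streaming.concurrentJobs was raised for this application. A hypothetical configuration sketch follows; the value 16 and the 60 s batch interval are illustrative assumptions, not values read from this log.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class ConcurrentJobsSketch {
    // Hypothetical setup; the actual values used by this application are not in the log.
    static JavaStreamingContext create() {
        SparkConf conf = new SparkConf()
                .setAppName("concurrent-jobs-sketch")
                // Lets the streaming JobScheduler run this many output jobs of a batch in parallel
                // (the default is 1, i.e. the jobs of a batch run one after another).
                .set("spark.streaming.concurrentJobs", "16");
        return new JavaStreamingContext(conf, Durations.seconds(60));
    }
}
```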
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39009, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c961a, negotiated timeout = 60000 18/04/17 17:06:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c961a 18/04/17 17:06:21 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c961a closed 18/04/17 17:06:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:21 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.26 from job set of time 1523973960000 ms 18/04/17 17:06:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 872.0 (TID 872) in 21993 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:06:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 872.0, whose tasks have all completed, from pool 18/04/17 17:06:22 INFO scheduler.DAGScheduler: ResultStage 872 (foreachPartition at PredictorEngineApp.java:153) finished in 21.993 s 18/04/17 17:06:22 INFO scheduler.DAGScheduler: Job 872 finished: foreachPartition at PredictorEngineApp.java:153, took 22.015516 s 18/04/17 17:06:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3a59d9cc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3a59d9cc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39014, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c961c, negotiated timeout = 60000 18/04/17 17:06:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c961c 18/04/17 17:06:22 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c961c closed 18/04/17 17:06:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:22 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.10 from job set of time 1523973960000 ms 18/04/17 17:06:23 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 894.0 (TID 894) in 23812 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:06:23 INFO cluster.YarnClusterScheduler: Removed TaskSet 894.0, whose tasks have all completed, from pool 18/04/17 17:06:23 INFO scheduler.DAGScheduler: ResultStage 894 (foreachPartition at PredictorEngineApp.java:153) finished in 23.813 s 18/04/17 17:06:23 INFO scheduler.DAGScheduler: Job 894 finished: foreachPartition at PredictorEngineApp.java:153, took 23.903921 s 18/04/17 17:06:23 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x357a4dba connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:23 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x357a4dba0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:23 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:23 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43615, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:23 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f0e, negotiated timeout = 60000 18/04/17 17:06:23 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f0e 18/04/17 17:06:23 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f0e closed 18/04/17 17:06:23 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:23 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.1 from job set of time 1523973960000 ms 18/04/17 17:06:24 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 885.0 (TID 885) in 24431 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:06:24 INFO scheduler.DAGScheduler: ResultStage 885 (foreachPartition at PredictorEngineApp.java:153) finished in 24.431 s 18/04/17 17:06:24 INFO cluster.YarnClusterScheduler: Removed TaskSet 885.0, whose tasks have all completed, from pool 18/04/17 17:06:24 INFO scheduler.DAGScheduler: Job 885 finished: foreachPartition at PredictorEngineApp.java:153, took 24.498810 s 18/04/17 17:06:24 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x40cd9c11 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:06:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x40cd9c110x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:06:24 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:06:24 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:60874, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:06:24 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95cd, negotiated timeout = 60000 18/04/17 17:06:24 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95cd 18/04/17 17:06:24 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95cd closed 18/04/17 17:06:24 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:06:24 INFO scheduler.JobScheduler: Finished job streaming job 1523973960000 ms.11 from job set of time 1523973960000 ms 18/04/17 17:06:24 INFO scheduler.JobScheduler: Total delay: 24.587 s for time 1523973960000 ms (execution: 24.536 s) 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1152 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1152 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1152 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1152 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1153 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1153 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1153 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1153 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1154 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1154 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1154 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1154 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1155 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1155 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1155 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1155 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1156 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1156 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1156 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1156 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1157 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1157 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1157 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1157 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1158 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1158 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1158 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1158 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1159 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1159 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1159 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1159 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1160 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1160 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1160 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1160 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1161 
from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1161 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1161 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1161 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1162 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1162 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1162 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1162 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1163 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1163 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1163 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1163 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1164 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1164 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1164 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1164 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1165 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1165 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1165 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1165 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1166 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1166 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1166 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1166 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1167 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1167 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1167 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1167 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1168 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1168 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1168 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1168 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1169 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1169 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1169 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1169 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1170 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1170 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1170 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1170 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1171 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1171 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1171 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1171 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1172 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1172 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1172 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1172 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1173 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1173 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1173 from 
persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1173 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1174 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1174 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1174 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1174 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1175 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1175 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1175 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1175 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1176 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1176 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1176 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1176 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1177 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1177 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1177 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1177 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1178 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1178 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1178 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1178 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1179 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1179 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1179 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1179 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1180 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1180 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1180 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1180 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1181 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1181 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1181 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1181 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1182 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1182 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1182 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1182 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1183 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1183 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1183 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1183 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1184 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1184 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1184 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1184 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1185 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1185 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1185 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1185 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1186 from 
persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1186 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1186 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1186 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1187 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1187 18/04/17 17:06:24 INFO kafka.KafkaRDD: Removing RDD 1187 from persistence list 18/04/17 17:06:24 INFO storage.BlockManager: Removing RDD 1187 18/04/17 17:06:24 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:06:24 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973840000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Added jobs for time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.1 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.0 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.2 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.3 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.0 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.4 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.5 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.6 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.3 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.4 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.8 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.7 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.9 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.10 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.11 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.13 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.12 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.13 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.14 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.14 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.15 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.16 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.17 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.16 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.18 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.17 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.19 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.21 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.20 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.21 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.23 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.22 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.24 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.25 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.26 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.27 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.28 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.29 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.31 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.30 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.32 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.30 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.35 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.33 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974020000 ms.34 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.35 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 896 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 896 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 896 (KafkaRDD[1236] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_896 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_896_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_896_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 896 from broadcast at 
DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 896 (KafkaRDD[1236] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 896.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 897 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 897 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 896.0 (TID 896, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 897 (KafkaRDD[1239] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_897 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_897_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_897_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 897 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 897 (KafkaRDD[1239] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 897.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 898 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 898 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 898 (KafkaRDD[1250] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 897.0 (TID 897, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_898 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_898_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_898_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 898 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 898 (KafkaRDD[1250] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 898.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 899 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 899 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 899 (KafkaRDD[1243] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 898.0 (TID 898, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_899 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_899_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_899_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 899 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 899 (KafkaRDD[1243] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 899.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 900 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 900 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 900 (KafkaRDD[1252] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 899.0 (TID 899, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_900 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_900_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_900_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 900 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 900 (KafkaRDD[1252] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 900.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 901 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 901 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 901 (KafkaRDD[1226] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_897_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO 
storage.MemoryStore: Block broadcast_901 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 900.0 (TID 900, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_896_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_901_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_901_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 901 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 901 (KafkaRDD[1226] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 901.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 902 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 902 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 902 (KafkaRDD[1233] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_902 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 901.0 (TID 901, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_898_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_902_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_902_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 902 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 902 (KafkaRDD[1233] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 902.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 903 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 903 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 903 (KafkaRDD[1242] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_903 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 902.0 (TID 902, ***hostname masked***, executor 
1, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_903_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_903_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 903 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 903 (KafkaRDD[1242] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 903.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 904 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 904 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 904 (KafkaRDD[1248] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_904 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 903.0 (TID 903, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_899_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_900_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 877 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_871_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_904_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_904_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 904 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 904 (KafkaRDD[1248] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 904.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 905 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 905 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 905 (KafkaRDD[1247] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 904.0 (TID 904, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_905 stored as values in memory (estimated size 
5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_871_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_878_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_878_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 880 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_877_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_877_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_904_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 874 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 879 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 876 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 872 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 871 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_876_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_876_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 881 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 883 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_881_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_905_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_905_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 905 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 905 (KafkaRDD[1247] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 905.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 906 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 906 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_881_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 906 (KafkaRDD[1253] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 905.0 (TID 905, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_906 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 
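The entries above show one streaming batch (time 1523974020000 ms) fanning out into dozens of single-partition jobs, each a foreachPartition over a KafkaRDD produced by createDirectStream (the call sites PredictorEngineApp.java:125 and PredictorEngineApp.java:153 in the log). What follows is a minimal sketch, not the actual PredictorEngineApp source, of the Spark 1.6 Java driver pattern that produces this shape of log, assuming one Kafka direct stream per topic and a 60-second batch interval (the batch times above fall on one-minute boundaries); the broker addresses, topic names and per-record processing are hypothetical.

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public class DirectStreamSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        // 60 s batches: the batch times in the log fall on one-minute boundaries.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical brokers

        // One direct stream per topic yields one KafkaRDD per topic per batch,
        // which is why each batch above fans out into ~36 foreachPartition jobs.
        List<String> topics = Arrays.asList("topic-00", "topic-01"); // hypothetical topic names
        for (String topic : topics) {
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, Collections.singleton(topic));

            stream.foreachRDD((JavaPairRDD<String, String> rdd) -> {
                // Each foreachPartition becomes one single-stage Spark job
                // ("Got job N ... with 1 output partitions" in the log).
                rdd.foreachPartition((Iterator<Tuple2<String, String>> records) -> {
                    while (records.hasNext()) {
                        Tuple2<String, String> record = records.next();
                        // record._1() is the Kafka key, record._2() the payload;
                        // scoring and the external write would happen here.
                    }
                });
            });
        }

        jssc.start();
        jssc.awaitTermination();
    }
}

Under these assumptions, a KafkaRDD backed by a single Kafka partition yields exactly one task per job, which matches the "with 1 output partitions" and "Adding task set N.0 with 1 tasks" lines above.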
18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 882 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_880_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_880_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 885 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_883_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_883_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 884 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_882_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_882_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_906_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_879_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_906_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 906 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 906 (KafkaRDD[1253] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 906.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 908 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 907 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 907 (KafkaRDD[1231] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 906.0 (TID 906, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_907 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_879_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_907_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_907_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 907 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 907 (KafkaRDD[1231] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 907.0 with 1 tasks 18/04/17 17:07:00 INFO 
scheduler.DAGScheduler: Got job 907 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 908 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 908 (KafkaRDD[1255] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 907.0 (TID 907, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_908 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 878 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 886 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_884_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_884_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_908_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_908_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_873_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 908 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 908 (KafkaRDD[1255] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 908.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 909 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 909 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 909 (KafkaRDD[1246] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 908.0 (TID 908, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_909 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_873_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_909_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_909_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 909 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from 
ResultStage 909 (KafkaRDD[1246] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 909.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 910 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 910 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 910 (KafkaRDD[1229] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_910 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 909.0 (TID 909, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_907_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_910_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_910_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 910 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 910 (KafkaRDD[1229] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 910.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 911 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 911 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 911 (KafkaRDD[1249] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_911 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 910.0 (TID 910, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_906_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 887 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_903_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_911_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_902_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_911_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed 
broadcast_885_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 911 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 911 (KafkaRDD[1249] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 911.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 913 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 912 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 912 (KafkaRDD[1258] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_885_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_912 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 911.0 (TID 911, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_909_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_870_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_870_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_912_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_912_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 912 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 912 (KafkaRDD[1258] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 912.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 912 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 913 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 913 (KafkaRDD[1256] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_913 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 912.0 (TID 912, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_875_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, 
free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_908_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_875_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_905_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 888 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_913_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_913_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_886_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 913 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 913 (KafkaRDD[1256] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 913.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 914 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 914 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 914 (KafkaRDD[1244] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_914 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 913.0 (TID 913, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_901_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_886_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_910_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_911_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 875 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_888_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_914_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_914_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_888_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 914 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO 
scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 914 (KafkaRDD[1244] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 914.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 915 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 915 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 915 (KafkaRDD[1251] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 889 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_915 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 914.0 (TID 914, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_887_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_887_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 890 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_915_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_913_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_890_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_915_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 915 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 915 (KafkaRDD[1251] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 915.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 917 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 916 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 916 (KafkaRDD[1257] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_916 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 915.0 (TID 915, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_890_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 
891 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_889_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_916_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_916_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 916 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 916 (KafkaRDD[1257] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 916.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 916 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 917 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 917 (KafkaRDD[1232] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_889_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_917 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 916.0 (TID 916, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_912_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_892_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_917_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_917_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 917 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 917 (KafkaRDD[1232] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 917.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 918 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 918 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 918 (KafkaRDD[1225] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_892_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_918 stored as values in 
memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 917.0 (TID 917, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_918_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_918_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 918 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 918 (KafkaRDD[1225] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 918.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 919 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 919 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 919 (KafkaRDD[1235] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_919 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 918.0 (TID 918, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_919_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_919_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 919 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 919 (KafkaRDD[1235] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 919.0 with 1 tasks 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 893 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 920 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 920 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 920 (KafkaRDD[1230] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_920 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_891_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 919.0 (TID 919, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_915_piece0 in memory on 
***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_891_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_920_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_920_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 920 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 920 (KafkaRDD[1230] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 920.0 with 1 tasks 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Got job 921 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 921 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting ResultStage 921 (KafkaRDD[1234] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_921 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 920.0 (TID 920, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_918_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.MemoryStore: Block broadcast_921_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_921_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO spark.SparkContext: Created broadcast 921 from broadcast at DAGScheduler.scala:1006 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 921 (KafkaRDD[1234] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Adding task set 921.0 with 1 tasks 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_917_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 892 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_872_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_919_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 921.0 (TID 921, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_872_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 894 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 873 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 895 18/04/17 17:07:00 INFO 
storage.BlockManagerInfo: Removed broadcast_893_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_893_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_914_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_895_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_895_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_920_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO spark.ContextCleaner: Cleaned accumulator 896 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_894_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_894_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_921_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_874_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Added broadcast_916_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO storage.BlockManagerInfo: Removed broadcast_874_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 901.0 (TID 901) in 111 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:07:00 INFO scheduler.DAGScheduler: ResultStage 901 (foreachPartition at PredictorEngineApp.java:153) finished in 0.112 s 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 901.0, whose tasks have all completed, from pool 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Job 901 finished: foreachPartition at PredictorEngineApp.java:153, took 0.135076 s 18/04/17 17:07:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x33d10519 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x33d105190x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39158, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9628, negotiated timeout = 60000 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 912.0 (TID 912) in 63 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 912.0, whose tasks have all completed, from pool 18/04/17 17:07:00 INFO scheduler.DAGScheduler: ResultStage 912 (foreachPartition at PredictorEngineApp.java:153) finished in 0.065 s 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Job 913 finished: foreachPartition at PredictorEngineApp.java:153, took 0.151627 s 18/04/17 17:07:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9628 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 918.0 (TID 918) in 48 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 918.0, whose tasks have all completed, from pool 18/04/17 17:07:00 INFO scheduler.DAGScheduler: ResultStage 918 (foreachPartition at PredictorEngineApp.java:153) finished in 0.050 s 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Job 918 finished: foreachPartition at PredictorEngineApp.java:153, took 0.154584 s 18/04/17 17:07:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7e873e20 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7e873e200x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32779, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9628 closed 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.2 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 916.0 (TID 916) in 59 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 916.0, whose tasks have all completed, from pool 18/04/17 17:07:00 INFO scheduler.DAGScheduler: ResultStage 916 (foreachPartition at PredictorEngineApp.java:153) finished in 0.059 s 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Job 917 finished: foreachPartition at PredictorEngineApp.java:153, took 0.159812 s 18/04/17 17:07:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x31a0845 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x31a08450x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32780, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95d5, negotiated timeout = 60000 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 920.0 (TID 920) in 54 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 920.0, whose tasks have all completed, from pool 18/04/17 17:07:00 INFO scheduler.DAGScheduler: ResultStage 920 (foreachPartition at PredictorEngineApp.java:153) finished in 0.054 s 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Job 920 finished: foreachPartition at PredictorEngineApp.java:153, took 0.164101 s 18/04/17 17:07:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1291b54e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1291b54e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43758, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.34 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95d6, negotiated timeout = 60000 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f16, negotiated timeout = 60000 18/04/17 17:07:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95d6 18/04/17 17:07:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95d5 18/04/17 17:07:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f16 18/04/17 17:07:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95d6 closed 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95d5 closed 18/04/17 17:07:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f16 closed 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.33 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.1 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.6 from job set of time 1523974020000 ms 18/04/17 17:07:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 911.0 (TID 911) in 617 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:07:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 911.0, whose tasks have all completed, from pool 18/04/17 17:07:00 INFO scheduler.DAGScheduler: ResultStage 911 (foreachPartition at PredictorEngineApp.java:153) finished in 0.617 s 18/04/17 17:07:00 INFO scheduler.DAGScheduler: Job 911 finished: foreachPartition at PredictorEngineApp.java:153, took 0.701670 s 18/04/17 17:07:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x79899029 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x798990290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43765, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f19, negotiated timeout = 60000 18/04/17 17:07:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f19 18/04/17 17:07:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f19 closed 18/04/17 17:07:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.25 from job set of time 1523974020000 ms 18/04/17 17:07:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 917.0 (TID 917) in 856 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:07:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 917.0, whose tasks have all completed, from pool 18/04/17 17:07:01 INFO scheduler.DAGScheduler: ResultStage 917 (foreachPartition at PredictorEngineApp.java:153) finished in 0.857 s 18/04/17 17:07:01 INFO scheduler.DAGScheduler: Job 916 finished: foreachPartition at PredictorEngineApp.java:153, took 0.959982 s 18/04/17 17:07:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c77712b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c77712b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43768, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f1a, negotiated timeout = 60000 18/04/17 17:07:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f1a 18/04/17 17:07:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f1a closed 18/04/17 17:07:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.8 from job set of time 1523974020000 ms 18/04/17 17:07:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 907.0 (TID 907) in 1685 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:07:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 907.0, whose tasks have all completed, from pool 18/04/17 17:07:01 INFO scheduler.DAGScheduler: ResultStage 907 (foreachPartition at PredictorEngineApp.java:153) finished in 1.685 s 18/04/17 17:07:01 INFO scheduler.DAGScheduler: Job 908 finished: foreachPartition at PredictorEngineApp.java:153, took 1.752150 s 18/04/17 17:07:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xe58a978 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xe58a9780x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43772, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f1d, negotiated timeout = 60000 18/04/17 17:07:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f1d 18/04/17 17:07:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f1d closed 18/04/17 17:07:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.7 from job set of time 1523974020000 ms 18/04/17 17:07:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 900.0 (TID 900) in 3595 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:07:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 900.0, whose tasks have all completed, from pool 18/04/17 17:07:03 INFO scheduler.DAGScheduler: ResultStage 900 (foreachPartition at PredictorEngineApp.java:153) finished in 3.596 s 18/04/17 17:07:03 INFO scheduler.DAGScheduler: Job 900 finished: foreachPartition at PredictorEngineApp.java:153, took 3.616309 s 18/04/17 17:07:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x173357de connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x173357de0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43780, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f1e, negotiated timeout = 60000 18/04/17 17:07:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f1e 18/04/17 17:07:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f1e closed 18/04/17 17:07:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.28 from job set of time 1523974020000 ms 18/04/17 17:07:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 899.0 (TID 899) in 5479 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:07:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 899.0, whose tasks have all completed, from pool 18/04/17 17:07:05 INFO scheduler.DAGScheduler: ResultStage 899 (foreachPartition at PredictorEngineApp.java:153) finished in 5.479 s 18/04/17 17:07:05 INFO scheduler.DAGScheduler: Job 899 finished: foreachPartition at PredictorEngineApp.java:153, took 5.495801 s 18/04/17 17:07:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f4148ef connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f4148ef0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39191, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9631, negotiated timeout = 60000 18/04/17 17:07:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9631 18/04/17 17:07:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9631 closed 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.19 from job set of time 1523974020000 ms 18/04/17 17:07:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 913.0 (TID 913) in 5630 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:07:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 913.0, whose tasks have all completed, from pool 18/04/17 17:07:05 INFO scheduler.DAGScheduler: ResultStage 913 (foreachPartition at PredictorEngineApp.java:153) finished in 5.630 s 18/04/17 17:07:05 INFO scheduler.DAGScheduler: Job 912 finished: foreachPartition at PredictorEngineApp.java:153, took 5.721685 s 18/04/17 17:07:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2594730d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2594730d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43789, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f20, negotiated timeout = 60000 18/04/17 17:07:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f20 18/04/17 17:07:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f20 closed 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.32 from job set of time 1523974020000 ms 18/04/17 17:07:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 896.0 (TID 896) in 5791 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:07:05 INFO scheduler.DAGScheduler: ResultStage 896 (foreachPartition at PredictorEngineApp.java:153) finished in 5.791 s 18/04/17 17:07:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 896.0, whose tasks have all completed, from pool 18/04/17 17:07:05 INFO scheduler.DAGScheduler: Job 896 finished: foreachPartition at PredictorEngineApp.java:153, took 5.798365 s 18/04/17 17:07:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3180fbe9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3180fbe90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32815, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95e2, negotiated timeout = 60000 18/04/17 17:07:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95e2 18/04/17 17:07:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95e2 closed 18/04/17 17:07:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.12 from job set of time 1523974020000 ms 18/04/17 17:07:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 902.0 (TID 902) in 5931 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:07:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 902.0, whose tasks have all completed, from pool 18/04/17 17:07:06 INFO scheduler.DAGScheduler: ResultStage 902 (foreachPartition at PredictorEngineApp.java:153) finished in 5.933 s 18/04/17 17:07:06 INFO scheduler.DAGScheduler: Job 902 finished: foreachPartition at PredictorEngineApp.java:153, took 5.958186 s 18/04/17 17:07:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x783f3d13 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x783f3d130x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39200, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9634, negotiated timeout = 60000 18/04/17 17:07:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9634 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9634 closed 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.9 from job set of time 1523974020000 ms 18/04/17 17:07:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 908.0 (TID 908) in 6036 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:07:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 908.0, whose tasks have all completed, from pool 18/04/17 17:07:06 INFO scheduler.DAGScheduler: ResultStage 908 (foreachPartition at PredictorEngineApp.java:153) finished in 6.036 s 18/04/17 17:07:06 INFO scheduler.DAGScheduler: Job 907 finished: foreachPartition at PredictorEngineApp.java:153, took 6.108278 s 18/04/17 17:07:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6ea55ecc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6ea55ecc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32822, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95e3, negotiated timeout = 60000 18/04/17 17:07:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95e3 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95e3 closed 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.31 from job set of time 1523974020000 ms 18/04/17 17:07:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 906.0 (TID 906) in 6199 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:07:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 906.0, whose tasks have all completed, from pool 18/04/17 17:07:06 INFO scheduler.DAGScheduler: ResultStage 906 (foreachPartition at PredictorEngineApp.java:153) finished in 6.199 s 18/04/17 17:07:06 INFO scheduler.DAGScheduler: Job 906 finished: foreachPartition at PredictorEngineApp.java:153, took 6.259356 s 18/04/17 17:07:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xb46b321 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xb46b3210x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43802, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f24, negotiated timeout = 60000 18/04/17 17:07:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f24 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f24 closed 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.29 from job set of time 1523974020000 ms 18/04/17 17:07:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 904.0 (TID 904) in 6419 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:07:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 904.0, whose tasks have all completed, from pool 18/04/17 17:07:06 INFO scheduler.DAGScheduler: ResultStage 904 (foreachPartition at PredictorEngineApp.java:153) finished in 6.419 s 18/04/17 17:07:06 INFO scheduler.DAGScheduler: Job 904 finished: foreachPartition at PredictorEngineApp.java:153, took 6.482645 s 18/04/17 17:07:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x25906bf3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x25906bf30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 915.0 (TID 915) in 6384 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 915.0, whose tasks have all completed, from pool 18/04/17 17:07:06 INFO scheduler.DAGScheduler: ResultStage 915 (foreachPartition at PredictorEngineApp.java:153) finished in 6.386 s 18/04/17 17:07:06 INFO scheduler.DAGScheduler: Job 915 finished: foreachPartition at PredictorEngineApp.java:153, took 6.483833 s 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39210, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5e32314e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5e32314e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32829, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9637, negotiated timeout = 60000 18/04/17 17:07:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9637 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95e4, negotiated timeout = 60000 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9637 closed 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95e4 18/04/17 17:07:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.24 from job set of time 1523974020000 ms 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95e4 closed 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.27 from job set of time 1523974020000 ms 18/04/17 17:07:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 914.0 (TID 914) in 6497 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:07:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 914.0, whose tasks have all completed, from pool 18/04/17 17:07:06 INFO scheduler.DAGScheduler: ResultStage 914 (foreachPartition at PredictorEngineApp.java:153) finished in 6.498 s 18/04/17 17:07:06 INFO scheduler.DAGScheduler: Job 914 finished: foreachPartition at PredictorEngineApp.java:153, took 6.592244 s 18/04/17 17:07:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4e1e56e1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4e1e56e10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39216, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9638, negotiated timeout = 60000 18/04/17 17:07:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9638 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9638 closed 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.20 from job set of time 1523974020000 ms 18/04/17 17:07:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 905.0 (TID 905) in 6797 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:07:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 905.0, whose tasks have all completed, from pool 18/04/17 17:07:06 INFO scheduler.DAGScheduler: ResultStage 905 (foreachPartition at PredictorEngineApp.java:153) finished in 6.797 s 18/04/17 17:07:06 INFO scheduler.DAGScheduler: Job 905 finished: foreachPartition at PredictorEngineApp.java:153, took 6.850233 s 18/04/17 17:07:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x22298363 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x222983630x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32837, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95e7, negotiated timeout = 60000 18/04/17 17:07:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95e7 18/04/17 17:07:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95e7 closed 18/04/17 17:07:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.23 from job set of time 1523974020000 ms 18/04/17 17:07:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 910.0 (TID 910) in 7581 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:07:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 910.0, whose tasks have all completed, from pool 18/04/17 17:07:07 INFO scheduler.DAGScheduler: ResultStage 910 (foreachPartition at PredictorEngineApp.java:153) finished in 7.581 s 18/04/17 17:07:07 INFO scheduler.DAGScheduler: Job 910 finished: foreachPartition at PredictorEngineApp.java:153, took 7.661187 s 18/04/17 17:07:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x12eef895 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x12eef8950x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43819, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f26, negotiated timeout = 60000 18/04/17 17:07:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f26 18/04/17 17:07:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f26 closed 18/04/17 17:07:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.5 from job set of time 1523974020000 ms 18/04/17 17:07:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 909.0 (TID 909) in 8347 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:07:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 909.0, whose tasks have all completed, from pool 18/04/17 17:07:08 INFO scheduler.DAGScheduler: ResultStage 909 (foreachPartition at PredictorEngineApp.java:153) finished in 8.347 s 18/04/17 17:07:08 INFO scheduler.DAGScheduler: Job 909 finished: foreachPartition at PredictorEngineApp.java:153, took 8.423780 s 18/04/17 17:07:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x10e69db9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x10e69db90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39228, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9639, negotiated timeout = 60000 18/04/17 17:07:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9639 18/04/17 17:07:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9639 closed 18/04/17 17:07:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.22 from job set of time 1523974020000 ms 18/04/17 17:07:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 903.0 (TID 903) in 8819 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:07:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 903.0, whose tasks have all completed, from pool 18/04/17 17:07:08 INFO scheduler.DAGScheduler: ResultStage 903 (foreachPartition at PredictorEngineApp.java:153) finished in 8.819 s 18/04/17 17:07:08 INFO scheduler.DAGScheduler: Job 903 finished: foreachPartition at PredictorEngineApp.java:153, took 8.847175 s 18/04/17 17:07:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x353e3a3f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x353e3a3f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43826, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f27, negotiated timeout = 60000 18/04/17 17:07:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f27 18/04/17 17:07:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f27 closed 18/04/17 17:07:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.18 from job set of time 1523974020000 ms 18/04/17 17:07:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 897.0 (TID 897) in 9554 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:07:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 897.0, whose tasks have all completed, from pool 18/04/17 17:07:09 INFO scheduler.DAGScheduler: ResultStage 897 (foreachPartition at PredictorEngineApp.java:153) finished in 9.555 s 18/04/17 17:07:09 INFO scheduler.DAGScheduler: Job 897 finished: foreachPartition at PredictorEngineApp.java:153, took 9.565844 s 18/04/17 17:07:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x31ff53f4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x31ff53f40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32853, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95e9, negotiated timeout = 60000 18/04/17 17:07:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95e9 18/04/17 17:07:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95e9 closed 18/04/17 17:07:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.15 from job set of time 1523974020000 ms 18/04/17 17:07:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 919.0 (TID 919) in 10113 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:07:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 919.0, whose tasks have all completed, from pool 18/04/17 17:07:10 INFO scheduler.DAGScheduler: ResultStage 919 (foreachPartition at PredictorEngineApp.java:153) finished in 10.113 s 18/04/17 17:07:10 INFO scheduler.DAGScheduler: Job 919 finished: foreachPartition at PredictorEngineApp.java:153, took 10.220603 s 18/04/17 17:07:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x551fad3e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x551fad3e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:32857, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95ec, negotiated timeout = 60000 18/04/17 17:07:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95ec 18/04/17 17:07:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95ec closed 18/04/17 17:07:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.11 from job set of time 1523974020000 ms 18/04/17 17:07:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 898.0 (TID 898) in 10316 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:07:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 898.0, whose tasks have all completed, from pool 18/04/17 17:07:10 INFO scheduler.DAGScheduler: ResultStage 898 (foreachPartition at PredictorEngineApp.java:153) finished in 10.316 s 18/04/17 17:07:10 INFO scheduler.DAGScheduler: Job 898 finished: foreachPartition at PredictorEngineApp.java:153, took 10.330601 s 18/04/17 17:07:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x32593d47 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x32593d470x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43837, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_920_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_920_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 897 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_897_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f29, negotiated timeout = 60000 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_897_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 898 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_896_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_896_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f29 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 900 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_898_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_898_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f29 closed 18/04/17 17:07:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.26 from job set of time 1523974020000 ms 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 899 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_900_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_900_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 901 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_899_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_899_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 903 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_901_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_901_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 902 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_903_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed 
broadcast_903_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 904 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_902_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_902_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 905 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_905_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_905_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 906 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_904_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_904_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 908 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_906_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_906_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 907 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_908_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_908_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 909 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_907_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_907_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 910 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_909_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_909_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 912 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_910_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_910_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 911 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_912_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_912_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 913 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_911_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 
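The scheduler, ContextCleaner and ZooKeeper entries in this part of the log are consistent with a Spark Streaming driver that reads Kafka through a direct stream and writes each partition out to HBase. The sketch below is a hypothetical reconstruction for orientation only: the createDirectStream and foreachPartition call sites (PredictorEngineApp.java:125 and :153) are named in the log, but every other class name, parameter, and the HBase table are assumptions, not taken from the application.

import java.util.Iterator;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public final class PredictorEngineSketch {

    // Hypothetical reconstruction: only the createDirectStream (line 125) and
    // foreachPartition (line 153) call sites are confirmed by the log; every
    // other name, parameter and the table are assumptions for illustration.
    static void buildPipeline(JavaStreamingContext jssc,
                              Map<String, String> kafkaParams,
                              Set<String> topics,
                              Configuration hbaseConf) {

        // PredictorEngineApp.java:125 -- a direct Kafka stream; one such stream
        // (and foreachRDD output operation) per topic would explain the many
        // parallel "streaming job ... ms.N" entries scheduled for each batch time.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class,
                StringDecoder.class, StringDecoder.class, kafkaParams, topics);

        stream.foreachRDD(rdd -> {
            // PredictorEngineApp.java:153 -- each KafkaRDD partition becomes one
            // ResultStage with a single task, matching the DAGScheduler entries.
            rdd.foreachPartition((Iterator<Tuple2<String, String>> records) -> {
                // A short-lived HBase connection per call would produce the paired
                // ZooKeeper "Session establishment complete" / "Session ... closed"
                // entries seen around every finished job (assumed, not confirmed).
                try (Connection hbase = ConnectionFactory.createConnection(hbaseConf);
                     Table table = hbase.getTable(TableName.valueOf("predictions"))) {
                    while (records.hasNext()) {
                        Tuple2<String, String> record = records.next();
                        // score 'record' and write the result -- not visible in the log
                    }
                }
            });
        });
    }
}

Opening and closing the connection inside every partition call is one plausible reading of the per-batch ZooKeeper session churn recorded above; reusing a connection per executor would be the usual alternative, but nothing in the log settles which of the two the application actually does.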
18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_911_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 915 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_913_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_913_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 914 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_915_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_915_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 916 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_914_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_914_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 918 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_916_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_916_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 917 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_918_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_918_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 919 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_917_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_917_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 921 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_919_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:07:10 INFO storage.BlockManagerInfo: Removed broadcast_919_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:07:10 INFO spark.ContextCleaner: Cleaned accumulator 920 18/04/17 17:07:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 921.0 (TID 921) in 11387 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:07:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 921.0, whose tasks have all completed, from pool 18/04/17 17:07:11 INFO scheduler.DAGScheduler: ResultStage 921 (foreachPartition at PredictorEngineApp.java:153) finished in 11.388 s 18/04/17 17:07:11 INFO scheduler.DAGScheduler: Job 921 finished: foreachPartition at PredictorEngineApp.java:153, took 11.499502 s 18/04/17 17:07:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2d22bed0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:07:11 INFO zookeeper.ZooKeeper: Initiating client 
connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2d22bed00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:07:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:07:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43841, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:07:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f2a, negotiated timeout = 60000 18/04/17 17:07:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f2a 18/04/17 17:07:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f2a closed 18/04/17 17:07:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:07:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974020000 ms.10 from job set of time 1523974020000 ms 18/04/17 17:07:11 INFO scheduler.JobScheduler: Total delay: 11.584 s for time 1523974020000 ms (execution: 11.535 s) 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1188 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1188 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1188 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1188 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1189 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1189 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1189 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1189 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1190 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1190 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1190 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1190 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1191 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1191 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1191 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1191 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1192 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1192 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1192 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1192 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1193 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1193 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1193 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1193 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1194 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1194 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1194 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1194 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1195 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1195 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1195 from persistence list 18/04/17 
17:07:11 INFO storage.BlockManager: Removing RDD 1195 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1196 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1196 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1196 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1196 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1197 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1197 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1197 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1197 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1198 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1198 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1198 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1198 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1199 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1199 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1199 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1199 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1200 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1200 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1200 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1200 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1201 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1201 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1201 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1201 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1202 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1202 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1202 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1202 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1203 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1203 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1203 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1203 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1204 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1204 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1204 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1204 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1205 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1205 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1205 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1205 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1206 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1206 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1206 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1206 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1207 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1207 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1207 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1207 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1208 from persistence list 18/04/17 
17:07:11 INFO storage.BlockManager: Removing RDD 1208 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1208 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1208 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1209 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1209 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1209 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1209 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1210 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1210 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1210 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1210 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1211 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1211 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1211 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1211 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1212 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1212 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1212 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1212 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1213 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1213 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1213 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1213 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1214 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1214 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1214 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1214 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1215 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1215 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1215 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1215 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1216 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1216 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1216 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1216 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1217 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1217 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1217 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1217 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1218 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1218 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1218 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1218 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1219 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1219 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1219 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1219 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1220 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1220 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1220 from persistence list 18/04/17 
17:07:11 INFO storage.BlockManager: Removing RDD 1220 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1221 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1221 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1221 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1221 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1222 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1222 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1222 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1222 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1223 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1223 18/04/17 17:07:11 INFO kafka.KafkaRDD: Removing RDD 1223 from persistence list 18/04/17 17:07:11 INFO storage.BlockManager: Removing RDD 1223 18/04/17 17:07:11 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:07:11 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973900000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Added jobs for time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.0 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.1 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.2 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.3 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.0 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.4 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.5 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.3 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.6 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.7 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.4 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.9 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.8 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.10 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.11 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.12 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.13 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.13 from job set of time 1523974080000 ms 18/04/17 17:08:00 
INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.14 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.16 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.16 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.15 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.18 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.14 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.17 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.19 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.20 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.17 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.21 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.22 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.21 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.23 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.25 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.24 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.26 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.27 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.28 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.29 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.30 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.31 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.30 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.32 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.33 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.35 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974080000 ms.34 from job set of time 1523974080000 ms 18/04/17 
17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.35 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 922 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 922 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 922 (KafkaRDD[1278] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_922 stored as values in memory (estimated size 5.7 KB, 
free 491.6 MB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_922_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_922_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 922 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 922 (KafkaRDD[1278] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 922.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 923 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 923 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 923 (KafkaRDD[1272] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 922.0 (TID 922, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_923 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_923_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_923_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 923 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 923 (KafkaRDD[1272] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 923.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 924 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 924 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 924 (KafkaRDD[1287] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 923.0 (TID 923, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_924 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_924_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_924_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 924 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 924 (KafkaRDD[1287] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 924.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 925 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 925 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 925 (KafkaRDD[1288] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 924.0 (TID 924, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_925 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_925_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_925_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 925 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 925 (KafkaRDD[1288] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 925.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 926 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 926 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 926 (KafkaRDD[1289] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_926 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 925.0 (TID 925, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_926_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_926_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 926 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 926 (KafkaRDD[1289] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 926.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 927 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 927 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: 
Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 927 (KafkaRDD[1286] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_927 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 926.0 (TID 926, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_927_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_927_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 927 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 927 (KafkaRDD[1286] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 927.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 928 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 928 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 928 (KafkaRDD[1293] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_928 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 927.0 (TID 927, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_922_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_928_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_928_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 928 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 928 (KafkaRDD[1293] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 928.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 929 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 929 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 929 (KafkaRDD[1266] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_929 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: 
Starting task 0.0 in stage 928.0 (TID 928, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_924_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_929_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_929_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 929 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 929 (KafkaRDD[1266] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 929.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 930 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 930 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 930 (KafkaRDD[1261] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_930 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 929.0 (TID 929, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_930_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_930_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_926_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 930 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 930 (KafkaRDD[1261] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 930.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 931 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 931 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 931 (KafkaRDD[1268] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_931 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 930.0 (TID 930, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_925_piece0 in memory on ***hostname 
masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_931_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_931_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 931 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 931 (KafkaRDD[1268] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 931.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 932 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 932 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 932 (KafkaRDD[1294] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_932 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 931.0 (TID 931, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_923_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_932_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_932_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 932 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 932 (KafkaRDD[1294] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 932.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 933 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 933 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 933 (KafkaRDD[1279] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_933 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 932.0 (TID 932, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_928_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_933_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: 
Added broadcast_933_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 933 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 933 (KafkaRDD[1279] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 933.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 934 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 934 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 934 (KafkaRDD[1282] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_934 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 933.0 (TID 933, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_927_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_934_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_934_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 934 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 934 (KafkaRDD[1282] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 934.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 936 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 935 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 935 (KafkaRDD[1284] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_935 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 934.0 (TID 934, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_932_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_929_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_935_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_935_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 
491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 935 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 935 (KafkaRDD[1284] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 935.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 935 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 936 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 936 (KafkaRDD[1280] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_936 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 935.0 (TID 935, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_933_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_931_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_936_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_936_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 936 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 936 (KafkaRDD[1280] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 936.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 937 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 937 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 937 (KafkaRDD[1275] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_937 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 936.0 (TID 936, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_930_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_937_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_937_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 937 from 
broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 937 (KafkaRDD[1275] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 937.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 938 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 938 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 938 (KafkaRDD[1265] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_938 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 937.0 (TID 937, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_934_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO spark.ContextCleaner: Cleaned accumulator 922 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Removed broadcast_921_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_938_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Removed broadcast_921_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_938_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 938 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 938 (KafkaRDD[1265] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 938.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 940 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 939 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 939 (KafkaRDD[1267] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_939 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 938.0 (TID 938, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_939_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_939_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 939 from broadcast at 
DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 939 (KafkaRDD[1267] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 939.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 939 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 940 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 940 (KafkaRDD[1270] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_940 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 939.0 (TID 939, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_940_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_940_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 940 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 940 (KafkaRDD[1270] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 940.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 941 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 941 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 941 (KafkaRDD[1291] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_941 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 940.0 (TID 940, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_939_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_941_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_941_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 941 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 941 (KafkaRDD[1291] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 941.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 942 (foreachPartition at 
PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 942 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 942 (KafkaRDD[1271] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_942 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 941.0 (TID 941, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_936_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_942_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_942_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 942 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 942 (KafkaRDD[1271] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 942.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 943 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 943 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 943 (KafkaRDD[1292] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_943 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 942.0 (TID 942, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_943_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_943_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 943 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 943 (KafkaRDD[1292] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 943.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 944 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 944 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: 
Submitting ResultStage 944 (KafkaRDD[1285] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_944 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 943.0 (TID 943, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_935_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_944_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_944_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 944 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 944 (KafkaRDD[1285] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 944.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 945 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 945 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 945 (KafkaRDD[1269] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_945 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 944.0 (TID 944, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_942_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_938_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_945_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_945_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 945 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 945 (KafkaRDD[1269] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 945.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 946 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 946 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 946 (KafkaRDD[1283] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_946 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 945.0 (TID 945, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_946_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_946_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 946 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 946 (KafkaRDD[1283] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 946.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Got job 947 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 947 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting ResultStage 947 (KafkaRDD[1262] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_947 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 946.0 (TID 946, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:08:00 INFO storage.MemoryStore: Block broadcast_947_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_947_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:08:00 INFO spark.SparkContext: Created broadcast 947 from broadcast at DAGScheduler.scala:1006 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 947 (KafkaRDD[1262] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Adding task set 947.0 with 1 tasks 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 947.0 (TID 947, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_944_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_937_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 929.0 (TID 929) in 73 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 929.0, whose tasks have all completed, from pool 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_945_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: ResultStage 929 (foreachPartition at PredictorEngineApp.java:153) finished in 0.073 s 18/04/17 17:08:00 INFO scheduler.DAGScheduler: 
Job 929 finished: foreachPartition at PredictorEngineApp.java:153, took 0.099993 s 18/04/17 17:08:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1633b390 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1633b3900x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39406, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_943_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_947_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_941_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9647, negotiated timeout = 60000 18/04/17 17:08:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9647 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_940_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9647 closed 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 927.0 (TID 927) in 107 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:08:00 INFO scheduler.DAGScheduler: ResultStage 927 (foreachPartition at PredictorEngineApp.java:153) finished in 0.108 s 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 927.0, whose tasks have all completed, from pool 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Job 927 finished: foreachPartition at PredictorEngineApp.java:153, took 0.129030 s 18/04/17 17:08:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4dbb00 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4dbb000x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39409, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:00 INFO storage.BlockManagerInfo: Added broadcast_946_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.6 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9648, negotiated timeout = 60000 18/04/17 17:08:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9648 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 944.0 (TID 944) in 59 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 944.0, whose tasks have all completed, from pool 18/04/17 17:08:00 INFO scheduler.DAGScheduler: ResultStage 944 (foreachPartition at PredictorEngineApp.java:153) finished in 0.059 s 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Job 944 finished: foreachPartition at PredictorEngineApp.java:153, took 0.144332 s 18/04/17 17:08:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x120c4522 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x120c45220x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44007, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9648 closed 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f35, negotiated timeout = 60000 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.26 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f35 18/04/17 17:08:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f35 closed 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.25 from job set of time 1523974080000 ms 18/04/17 17:08:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 926.0 (TID 926) in 192 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:08:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 926.0, whose tasks have all completed, from pool 18/04/17 17:08:00 INFO scheduler.DAGScheduler: ResultStage 926 (foreachPartition at PredictorEngineApp.java:153) finished in 0.194 s 18/04/17 17:08:00 INFO scheduler.DAGScheduler: Job 926 finished: foreachPartition at PredictorEngineApp.java:153, took 0.211202 s 18/04/17 17:08:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6d7119b2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6d7119b20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39415, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c964a, negotiated timeout = 60000 18/04/17 17:08:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c964a 18/04/17 17:08:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c964a closed 18/04/17 17:08:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.29 from job set of time 1523974080000 ms 18/04/17 17:08:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 931.0 (TID 931) in 954 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:08:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 931.0, whose tasks have all completed, from pool 18/04/17 17:08:01 INFO scheduler.DAGScheduler: ResultStage 931 (foreachPartition at PredictorEngineApp.java:153) finished in 0.954 s 18/04/17 17:08:01 INFO scheduler.DAGScheduler: Job 931 finished: foreachPartition at PredictorEngineApp.java:153, took 0.985947 s 18/04/17 17:08:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x41b4ab2f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x41b4ab2f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39420, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c964e, negotiated timeout = 60000 18/04/17 17:08:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c964e 18/04/17 17:08:01 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c964e closed 18/04/17 17:08:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.8 from job set of time 1523974080000 ms 18/04/17 17:08:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 939.0 (TID 939) in 2076 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:08:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 939.0, whose tasks have all completed, from pool 18/04/17 17:08:02 INFO scheduler.DAGScheduler: ResultStage 939 (foreachPartition at PredictorEngineApp.java:153) finished in 2.078 s 18/04/17 17:08:02 INFO scheduler.DAGScheduler: Job 940 finished: foreachPartition at PredictorEngineApp.java:153, took 2.144494 s 18/04/17 17:08:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6e3238a3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6e3238a30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33042, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95fc, negotiated timeout = 60000 18/04/17 17:08:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95fc 18/04/17 17:08:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95fc closed 18/04/17 17:08:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.7 from job set of time 1523974080000 ms 18/04/17 17:08:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 945.0 (TID 945) in 3324 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:08:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 945.0, whose tasks have all completed, from pool 18/04/17 17:08:03 INFO scheduler.DAGScheduler: ResultStage 945 (foreachPartition at PredictorEngineApp.java:153) finished in 3.325 s 18/04/17 17:08:03 INFO scheduler.DAGScheduler: Job 945 finished: foreachPartition at PredictorEngineApp.java:153, took 3.413106 s 18/04/17 17:08:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x58b1eedc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x58b1eedc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44025, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f3a, negotiated timeout = 60000 18/04/17 17:08:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f3a 18/04/17 17:08:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f3a closed 18/04/17 17:08:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.9 from job set of time 1523974080000 ms 18/04/17 17:08:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 928.0 (TID 928) in 3688 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:08:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 928.0, whose tasks have all completed, from pool 18/04/17 17:08:03 INFO scheduler.DAGScheduler: ResultStage 928 (foreachPartition at PredictorEngineApp.java:153) finished in 3.688 s 18/04/17 17:08:03 INFO scheduler.DAGScheduler: Job 928 finished: foreachPartition at PredictorEngineApp.java:153, took 3.711581 s 18/04/17 17:08:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3aee718c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3aee718c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39433, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9650, negotiated timeout = 60000 18/04/17 17:08:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9650 18/04/17 17:08:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9650 closed 18/04/17 17:08:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.33 from job set of time 1523974080000 ms 18/04/17 17:08:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 943.0 (TID 943) in 5236 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:08:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 943.0, whose tasks have all completed, from pool 18/04/17 17:08:05 INFO scheduler.DAGScheduler: ResultStage 943 (foreachPartition at PredictorEngineApp.java:153) finished in 5.237 s 18/04/17 17:08:05 INFO scheduler.DAGScheduler: Job 943 finished: foreachPartition at PredictorEngineApp.java:153, took 5.318695 s 18/04/17 17:08:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7aa6a32e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7aa6a32e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39439, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9653, negotiated timeout = 60000 18/04/17 17:08:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9653 18/04/17 17:08:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9653 closed 18/04/17 17:08:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.32 from job set of time 1523974080000 ms 18/04/17 17:08:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 941.0 (TID 941) in 6595 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:08:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 941.0, whose tasks have all completed, from pool 18/04/17 17:08:06 INFO scheduler.DAGScheduler: ResultStage 941 (foreachPartition at PredictorEngineApp.java:153) finished in 6.596 s 18/04/17 17:08:06 INFO scheduler.DAGScheduler: Job 941 finished: foreachPartition at PredictorEngineApp.java:153, took 6.671003 s 18/04/17 17:08:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x60869100 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x608691000x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33061, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95fe, negotiated timeout = 60000 18/04/17 17:08:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95fe 18/04/17 17:08:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95fe closed 18/04/17 17:08:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.31 from job set of time 1523974080000 ms 18/04/17 17:08:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 923.0 (TID 923) in 6783 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:08:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 923.0, whose tasks have all completed, from pool 18/04/17 17:08:06 INFO scheduler.DAGScheduler: ResultStage 923 (foreachPartition at PredictorEngineApp.java:153) finished in 6.783 s 18/04/17 17:08:06 INFO scheduler.DAGScheduler: Job 923 finished: foreachPartition at PredictorEngineApp.java:153, took 6.792906 s 18/04/17 17:08:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x26c3ad39 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x26c3ad390x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33064, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a95ff, negotiated timeout = 60000 18/04/17 17:08:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a95ff 18/04/17 17:08:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a95ff closed 18/04/17 17:08:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.12 from job set of time 1523974080000 ms 18/04/17 17:08:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 922.0 (TID 922) in 6913 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:08:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 922.0, whose tasks have all completed, from pool 18/04/17 17:08:06 INFO scheduler.DAGScheduler: ResultStage 922 (foreachPartition at PredictorEngineApp.java:153) finished in 6.913 s 18/04/17 17:08:06 INFO scheduler.DAGScheduler: Job 922 finished: foreachPartition at PredictorEngineApp.java:153, took 6.920147 s 18/04/17 17:08:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b0fa294 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1b0fa2940x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33068, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9601, negotiated timeout = 60000 18/04/17 17:08:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9601 18/04/17 17:08:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9601 closed 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 933.0 (TID 933) in 6913 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:08:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 933.0, whose tasks have all completed, from pool 18/04/17 17:08:07 INFO scheduler.DAGScheduler: ResultStage 933 (foreachPartition at PredictorEngineApp.java:153) finished in 6.914 s 18/04/17 17:08:07 INFO scheduler.DAGScheduler: Job 933 finished: foreachPartition at PredictorEngineApp.java:153, took 6.950948 s 18/04/17 17:08:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x544f74b5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x544f74b50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39453, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.18 from job set of time 1523974080000 ms 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9657, negotiated timeout = 60000 18/04/17 17:08:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9657 18/04/17 17:08:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9657 closed 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.19 from job set of time 1523974080000 ms 18/04/17 17:08:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 930.0 (TID 930) in 7144 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:08:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 930.0, whose tasks have all completed, from pool 18/04/17 17:08:07 INFO scheduler.DAGScheduler: ResultStage 930 (foreachPartition at PredictorEngineApp.java:153) finished in 7.144 s 18/04/17 17:08:07 INFO scheduler.DAGScheduler: Job 930 finished: foreachPartition at PredictorEngineApp.java:153, took 7.173763 s 18/04/17 17:08:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x17843420 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x178434200x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44051, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f40, negotiated timeout = 60000 18/04/17 17:08:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f40 18/04/17 17:08:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f40 closed 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.1 from job set of time 1523974080000 ms 18/04/17 17:08:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 925.0 (TID 925) in 7906 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:08:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 925.0, whose tasks have all completed, from pool 18/04/17 17:08:07 INFO scheduler.DAGScheduler: ResultStage 925 (foreachPartition at PredictorEngineApp.java:153) finished in 7.907 s 18/04/17 17:08:07 INFO scheduler.DAGScheduler: Job 925 finished: foreachPartition at PredictorEngineApp.java:153, took 7.921863 s 18/04/17 17:08:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x438a5033 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x438a50330x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44056, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f42, negotiated timeout = 60000 18/04/17 17:08:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f42 18/04/17 17:08:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f42 closed 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.28 from job set of time 1523974080000 ms 18/04/17 17:08:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 937.0 (TID 937) in 8040 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:08:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 937.0, whose tasks have all completed, from pool 18/04/17 17:08:08 INFO scheduler.DAGScheduler: ResultStage 937 (foreachPartition at PredictorEngineApp.java:153) finished in 8.040 s 18/04/17 17:08:08 INFO scheduler.DAGScheduler: Job 937 finished: foreachPartition at PredictorEngineApp.java:153, took 8.091432 s 18/04/17 17:08:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5ef47bf9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5ef47bf90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33082, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9603, negotiated timeout = 60000 18/04/17 17:08:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9603 18/04/17 17:08:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9603 closed 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.15 from job set of time 1523974080000 ms 18/04/17 17:08:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 947.0 (TID 947) in 8457 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:08:08 INFO scheduler.DAGScheduler: ResultStage 947 (foreachPartition at PredictorEngineApp.java:153) finished in 8.457 s 18/04/17 17:08:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 947.0, whose tasks have all completed, from pool 18/04/17 17:08:08 INFO scheduler.DAGScheduler: Job 947 finished: foreachPartition at PredictorEngineApp.java:153, took 8.550651 s 18/04/17 17:08:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3bf3e6d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3bf3e6d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39467, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9658, negotiated timeout = 60000 18/04/17 17:08:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9658 18/04/17 17:08:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9658 closed 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.2 from job set of time 1523974080000 ms 18/04/17 17:08:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 934.0 (TID 934) in 8659 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:08:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 934.0, whose tasks have all completed, from pool 18/04/17 17:08:08 INFO scheduler.DAGScheduler: ResultStage 934 (foreachPartition at PredictorEngineApp.java:153) finished in 8.660 s 18/04/17 17:08:08 INFO scheduler.DAGScheduler: Job 934 finished: foreachPartition at PredictorEngineApp.java:153, took 8.699818 s 18/04/17 17:08:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xd7ac48a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xd7ac48a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44065, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f45, negotiated timeout = 60000 18/04/17 17:08:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f45 18/04/17 17:08:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f45 closed 18/04/17 17:08:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.22 from job set of time 1523974080000 ms 18/04/17 17:08:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 935.0 (TID 935) in 9170 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:08:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 935.0, whose tasks have all completed, from pool 18/04/17 17:08:09 INFO scheduler.DAGScheduler: ResultStage 935 (foreachPartition at PredictorEngineApp.java:153) finished in 9.171 s 18/04/17 17:08:09 INFO scheduler.DAGScheduler: Job 936 finished: foreachPartition at PredictorEngineApp.java:153, took 9.214238 s 18/04/17 17:08:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x248c6b67 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x248c6b670x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39474, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9659, negotiated timeout = 60000 18/04/17 17:08:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9659 18/04/17 17:08:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9659 closed 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.24 from job set of time 1523974080000 ms 18/04/17 17:08:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 924.0 (TID 924) in 9425 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:08:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 924.0, whose tasks have all completed, from pool 18/04/17 17:08:09 INFO scheduler.DAGScheduler: ResultStage 924 (foreachPartition at PredictorEngineApp.java:153) finished in 9.426 s 18/04/17 17:08:09 INFO scheduler.DAGScheduler: Job 924 finished: foreachPartition at PredictorEngineApp.java:153, took 9.438063 s 18/04/17 17:08:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x353be8b8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x353be8b80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44072, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 942.0 (TID 942) in 9365 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:08:09 INFO scheduler.DAGScheduler: ResultStage 942 (foreachPartition at PredictorEngineApp.java:153) finished in 9.367 s 18/04/17 17:08:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 942.0, whose tasks have all completed, from pool 18/04/17 17:08:09 INFO scheduler.DAGScheduler: Job 942 finished: foreachPartition at PredictorEngineApp.java:153, took 9.444252 s 18/04/17 17:08:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b8f2a51 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7b8f2a510x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39478, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f46, negotiated timeout = 60000 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c965a, negotiated timeout = 60000 18/04/17 17:08:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f46 18/04/17 17:08:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f46 closed 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c965a 18/04/17 17:08:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c965a closed 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.27 from job set of time 1523974080000 ms 18/04/17 17:08:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.11 from job set of time 1523974080000 ms 18/04/17 17:08:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 946.0 (TID 946) in 9647 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:08:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 946.0, whose tasks have all completed, from pool 18/04/17 17:08:09 INFO scheduler.DAGScheduler: ResultStage 946 (foreachPartition at PredictorEngineApp.java:153) finished in 9.648 s 18/04/17 17:08:09 INFO scheduler.DAGScheduler: Job 946 finished: foreachPartition at PredictorEngineApp.java:153, took 9.738681 s 18/04/17 17:08:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2a5d60e0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2a5d60e00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39484, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c965b, negotiated timeout = 60000 18/04/17 17:08:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c965b 18/04/17 17:08:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c965b closed 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.23 from job set of time 1523974080000 ms 18/04/17 17:08:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 932.0 (TID 932) in 9854 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:08:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 932.0, whose tasks have all completed, from pool 18/04/17 17:08:09 INFO scheduler.DAGScheduler: ResultStage 932 (foreachPartition at PredictorEngineApp.java:153) finished in 9.854 s 18/04/17 17:08:09 INFO scheduler.DAGScheduler: Job 932 finished: foreachPartition at PredictorEngineApp.java:153, took 9.889034 s 18/04/17 17:08:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x78a57ad0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x78a57ad00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33107, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9606, negotiated timeout = 60000 18/04/17 17:08:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9606 18/04/17 17:08:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9606 closed 18/04/17 17:08:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 936.0 (TID 936) in 9909 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:08:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 936.0, whose tasks have all completed, from pool 18/04/17 17:08:10 INFO scheduler.DAGScheduler: ResultStage 936 (foreachPartition at PredictorEngineApp.java:153) finished in 9.910 s 18/04/17 17:08:10 INFO scheduler.DAGScheduler: Job 935 finished: foreachPartition at PredictorEngineApp.java:153, took 9.955984 s 18/04/17 17:08:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xfa2167a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xfa2167a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39492, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.34 from job set of time 1523974080000 ms 18/04/17 17:08:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c965c, negotiated timeout = 60000 18/04/17 17:08:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c965c 18/04/17 17:08:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c965c closed 18/04/17 17:08:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.20 from job set of time 1523974080000 ms 18/04/17 17:08:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 938.0 (TID 938) in 13022 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:08:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 938.0, whose tasks have all completed, from pool 18/04/17 17:08:13 INFO scheduler.DAGScheduler: ResultStage 938 (foreachPartition at PredictorEngineApp.java:153) finished in 13.023 s 18/04/17 17:08:13 INFO scheduler.DAGScheduler: Job 938 finished: foreachPartition at PredictorEngineApp.java:153, took 13.085708 s 18/04/17 17:08:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xe3bc43 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xe3bc430x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error)
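
The hconnection-0x... ZooKeeper sessions that are opened and closed around every finished job above come from the HBase client used by the streaming application while it writes out each micro-batch. The application source itself is not part of this log, so the following is only a minimal sketch, assuming hypothetical broker, topic, table and column names, of a Spark 1.6 Kafka direct-stream pipeline consistent with these entries: KafkaUtils.createDirectStream (the "createDirectStream at PredictorEngineApp.java:125" shown in the stage descriptions), a 60-second batch interval (consistent with the batch times 1523974080000 ms and 1523974140000 ms), and a foreachPartition (the "foreachPartition at PredictorEngineApp.java:153") that opens an HBase connection per partition, writes, and closes it again.

    // Minimal sketch only: reconstructed from the log, not the actual PredictorEngineApp source.
    // Broker, topic, table, and column names below are hypothetical placeholders, and the Put
    // shown is only an assumption; the log proves an HBase connection per job, not the operation.
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    import kafka.serializer.StringDecoder;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    public final class PredictorEngineSketch {
        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf().setAppName("predictor-engine"); // app name assumed
            // 60 s batches, consistent with the ...080000 ms / ...140000 ms batch times in the log
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.minutes(1));

            Map<String, String> kafkaParams = new HashMap<>();
            kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers
            Set<String> topics = Collections.singleton("events");                 // placeholder topic

            // Corresponds to "createDirectStream at PredictorEngineApp.java:125" in the stage descriptions
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, topics);

            stream.foreachRDD(rdd ->
                // Corresponds to "foreachPartition at PredictorEngineApp.java:153"
                rdd.foreachPartition(records -> {
                    // A fresh HBase Connection per partition per batch: this is what appears in the log
                    // as a new hconnection-0x... ZooKeeper session being opened and then closed
                    // ("Closing zookeeper sessionid=...") for every streaming job.
                    try (Connection hbase = ConnectionFactory.createConnection(HBaseConfiguration.create());
                         Table table = hbase.getTable(TableName.valueOf("predictions"))) { // table name assumed
                        while (records.hasNext()) {
                            Tuple2<String, String> record = records.next();
                            Put put = new Put(Bytes.toBytes(record._1()));
                            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"),
                                    Bytes.toBytes(record._2())); // real scoring/output logic not visible in the log
                            table.put(put);
                        }
                    }
                })
            );

            jssc.start();
            jssc.awaitTermination();
        }
    }

Opening the connection inside foreachPartition keeps the HBase client out of the closure Spark has to serialize, at the cost of one new ZooKeeper session per partition per batch, which is the churn visible throughout this log. Two further observations, both hedged since the configuration itself is not visible here: each batch is split into jobs ms.0 through ms.35, so the real application presumably builds several such streams (one per topic or per output operation) rather than the single one sketched; and the fact that many of those jobs start and finish concurrently is consistent with spark.streaming.concurrentJobs having been raised above its default of 1.
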
18/04/17 17:08:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33119, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9609, negotiated timeout = 60000 18/04/17 17:08:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9609 18/04/17 17:08:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9609 closed 18/04/17 17:08:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.5 from job set of time 1523974080000 ms 18/04/17 17:08:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 940.0 (TID 940) in 14750 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:08:14 INFO scheduler.DAGScheduler: ResultStage 940 (foreachPartition at PredictorEngineApp.java:153) finished in 14.752 s 18/04/17 17:08:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 940.0, whose tasks have all completed, from pool 18/04/17 17:08:14 INFO scheduler.DAGScheduler: Job 939 finished: foreachPartition at PredictorEngineApp.java:153, took 14.822450 s 18/04/17 17:08:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3c61c3e9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:08:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3c61c3e90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:08:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181.
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:08:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39505, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:08:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c965d, negotiated timeout = 60000 18/04/17 17:08:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c965d 18/04/17 17:08:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c965d closed 18/04/17 17:08:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:08:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974080000 ms.10 from job set of time 1523974080000 ms 18/04/17 17:08:14 INFO scheduler.JobScheduler: Total delay: 14.914 s for time 1523974080000 ms (execution: 14.857 s) 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1224 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1224 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1224 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1224 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1225 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1225 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1225 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1225 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1226 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1226 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1226 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1226 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1227 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1227 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1227 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1227 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1228 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1228 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1228 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1228 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1229 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1229 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1229 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1229 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1230 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1230 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1230 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1230 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1231 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1231 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1231 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1231 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1232 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1232 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1232 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1232 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1233 
from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1233 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1233 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1233 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1234 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1234 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1234 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1234 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1235 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1235 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1235 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1235 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1236 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1236 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1236 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1236 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1237 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1237 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1237 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1237 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1238 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1238 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1238 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1238 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1239 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1239 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1239 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1239 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1240 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1240 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1240 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1240 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1241 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1241 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1241 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1241 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1242 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1242 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1242 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1242 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1243 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1243 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1243 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1243 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1244 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1244 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1244 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1244 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1245 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1245 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1245 from 
persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1245 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1246 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1246 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1246 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1246 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1247 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1247 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1247 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1247 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1248 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1248 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1248 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1248 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1249 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1249 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1249 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1249 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1250 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1250 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1250 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1250 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1251 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1251 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1251 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1251 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1252 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1252 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1252 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1252 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1253 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1253 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1253 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1253 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1254 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1254 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1254 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1254 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1255 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1255 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1255 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1255 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1256 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1256 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1256 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1256 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1257 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1257 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1257 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1257 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1258 from 
persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1258 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1258 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1258 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1259 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1259 18/04/17 17:08:14 INFO kafka.KafkaRDD: Removing RDD 1259 from persistence list 18/04/17 17:08:14 INFO storage.BlockManager: Removing RDD 1259 18/04/17 17:08:14 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:08:14 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523973960000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Added jobs for time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.0 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.3 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.4 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.2 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.1 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.3 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.0 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.5 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.4 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.6 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.7 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.8 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.9 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.11 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.10 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.12 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.14 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.13 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.15 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.14 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.17 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO 
scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.13 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.17 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.18 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.16 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.20 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.19 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.16 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.21 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.21 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.22 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.24 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.23 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.25 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.26 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.27 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.28 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.29 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.30 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.31 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.30 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.32 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.33 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.34 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974140000 ms.35 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.35 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 948 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 948 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 948 (KafkaRDD[1328] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_948 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_948_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_948_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 948 from broadcast at 
DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 948 (KafkaRDD[1328] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 948.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 949 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 949 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 949 (KafkaRDD[1325] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 948.0 (TID 948, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_949 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_949_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_949_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 949 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 949 (KafkaRDD[1325] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 949.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 950 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 950 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 950 (KafkaRDD[1306] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 949.0 (TID 949, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_950 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 940 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 930 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_950_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_928_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_950_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 950 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 950 (KafkaRDD[1306] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: 
Adding task set 950.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 951 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 951 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 951 (KafkaRDD[1301] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 950.0 (TID 950, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_951 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_928_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_951_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_951_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 951 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 951 (KafkaRDD[1301] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 951.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 952 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 952 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 952 (KafkaRDD[1318] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_952 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 951.0 (TID 951, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_950_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_952_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_952_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_949_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 952 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 952 (KafkaRDD[1318] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 952.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: 
Got job 953 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 953 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 953 (KafkaRDD[1307] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_953 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 952.0 (TID 952, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_926_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_926_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 928 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 925 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_924_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_953_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_953_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 953 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 953 (KafkaRDD[1307] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 953.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 954 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 954 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 954 (KafkaRDD[1329] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_954 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 953.0 (TID 953, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_954_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_954_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 954 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 954 (KafkaRDD[1329] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 954.0 with 1 
tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 955 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 955 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 955 (KafkaRDD[1316] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_955 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 954.0 (TID 954, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_951_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_924_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_930_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_955_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_955_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 955 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 955 (KafkaRDD[1316] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 955.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 956 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 956 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 956 (KafkaRDD[1322] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_956 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 955.0 (TID 955, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_948_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_956_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_956_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 956 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 956 (KafkaRDD[1322] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 
INFO cluster.YarnClusterScheduler: Adding task set 956.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 957 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 957 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 957 (KafkaRDD[1305] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_957 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 956.0 (TID 956, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_953_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_957_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_957_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 957 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 957 (KafkaRDD[1305] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 957.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 958 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 958 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 958 (KafkaRDD[1327] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_958 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 957.0 (TID 957, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_958_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_958_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 958 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 958 (KafkaRDD[1327] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 958.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 959 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 959 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: 
Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 959 (KafkaRDD[1315] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_959 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 958.0 (TID 958, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_959_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_959_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 959 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 959 (KafkaRDD[1315] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 959.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 960 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 960 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 960 (KafkaRDD[1321] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_960 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 959.0 (TID 959, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_955_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_960_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_952_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_960_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_930_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 960 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 960 (KafkaRDD[1321] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 960.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 961 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 961 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: 
Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 961 (KafkaRDD[1314] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_961 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 931 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 960.0 (TID 960, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_925_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_956_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_957_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_925_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_961_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_961_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 961 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 961 (KafkaRDD[1314] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 961.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 962 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 962 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 962 (KafkaRDD[1311] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_962 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 961.0 (TID 961, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_931_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_931_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 932 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 934 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_962_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_932_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_962_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: 
Created broadcast 962 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 962 (KafkaRDD[1311] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 962.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 963 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 963 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 963 (KafkaRDD[1297] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_963 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_932_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 962.0 (TID 962, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_954_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 933 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_934_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_963_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_963_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 963 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 963 (KafkaRDD[1297] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 963.0 with 1 tasks 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_934_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 964 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 964 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_961_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 964 (KafkaRDD[1320] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_964 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_960_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting 
task 0.0 in stage 963.0 (TID 963, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 935 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_933_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_962_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_933_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_964_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_964_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 964 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 964 (KafkaRDD[1320] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 964.0 with 1 tasks 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 937 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 965 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 965 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 965 (KafkaRDD[1304] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_965 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_935_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 964.0 (TID 964, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_959_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_935_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_965_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_965_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 965 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 965 (KafkaRDD[1304] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 965.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 967 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 966 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 
INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 966 (KafkaRDD[1323] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_966 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 965.0 (TID 965, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_966_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_966_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 966 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 966 (KafkaRDD[1323] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 966.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 966 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 967 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 967 (KafkaRDD[1298] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_967 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 966.0 (TID 966, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_958_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_965_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_967_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_967_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 967 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 967 (KafkaRDD[1298] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 967.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 968 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 968 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 968 (KafkaRDD[1302] at 
createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_968 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 967.0 (TID 967, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_968_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_968_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_966_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 968 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 968 (KafkaRDD[1302] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 968.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 969 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 969 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 969 (KafkaRDD[1303] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_969 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 968.0 (TID 968, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_969_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_969_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 969 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 969 (KafkaRDD[1303] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 969.0 with 1 tasks 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_967_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 970 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 970 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 970 (KafkaRDD[1319] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_970 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:09:00 
INFO scheduler.TaskSetManager: Starting task 0.0 in stage 969.0 (TID 969, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_970_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_970_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 970 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 970 (KafkaRDD[1319] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 970.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 971 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 971 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_964_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 971 (KafkaRDD[1308] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_971 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 970.0 (TID 970, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_971_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_971_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 971 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 971 (KafkaRDD[1308] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 971.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 972 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 972 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 972 (KafkaRDD[1330] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_963_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 936 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 923 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_972 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 971.0 (TID 971, ***hostname 
masked***, executor 2, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_922_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_972_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_972_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 972 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 972 (KafkaRDD[1330] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 972.0 with 1 tasks 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Got job 973 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 973 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting ResultStage 973 (KafkaRDD[1324] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_973 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 972.0 (TID 972, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_922_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.MemoryStore: Block broadcast_973_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_973_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:09:00 INFO spark.SparkContext: Created broadcast 973 from broadcast at DAGScheduler.scala:1006 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_968_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 973 (KafkaRDD[1324] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Adding task set 973.0 with 1 tasks 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 924 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 973.0 (TID 973, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_929_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_929_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_970_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 938 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_936_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:09:00 
INFO storage.BlockManagerInfo: Added broadcast_972_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_971_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_969_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_936_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Added broadcast_973_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_938_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_938_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 939 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_937_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 954.0 (TID 954) in 86 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 954.0, whose tasks have all completed, from pool 18/04/17 17:09:00 INFO scheduler.DAGScheduler: ResultStage 954 (foreachPartition at PredictorEngineApp.java:153) finished in 0.087 s 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Job 954 finished: foreachPartition at PredictorEngineApp.java:153, took 0.133681 s 18/04/17 17:09:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x35a605ce connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x35a605ce0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
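The scheduler entries above all point at two call sites in the application: createDirectStream at PredictorEngineApp.java:125 and foreachPartition at PredictorEngineApp.java:153. The application source is not part of this log, but those names match the standard Spark 1.6 Kafka direct-stream loop, so the following is a minimal hypothetical sketch of what those two lines presumably look like. Broker list, topic name, batch interval and per-record handling are assumptions; only the API shape is taken from the log.

// Hypothetical reconstruction of the call sites named in the log
// (createDirectStream at PredictorEngineApp.java:125, foreachPartition at :153).
// Broker list, topic name, batch interval and per-record handling are assumptions.
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public final class PredictorEngineAppSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine");
        // The batch interval is not recoverable from this excerpt; 60 s is a placeholder.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // assumption

        // One direct stream like this per topic would explain the many numbered
        // output operations (ms.2 ... ms.34) in each batch's job set.
        Set<String> topics = new HashSet<>(Arrays.asList("predictor-input")); // assumption
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class,
                StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        // Each batch, every such output operation becomes one small job whose single
        // ResultStage runs one foreachPartition task -- the pattern filling this log.
        stream.foreachRDD(new VoidFunction<JavaPairRDD<String, String>>() {
            @Override
            public void call(JavaPairRDD<String, String> rdd) {
                rdd.foreachPartition(new VoidFunction<Iterator<Tuple2<String, String>>>() {
                    @Override
                    public void call(Iterator<Tuple2<String, String>> records) {
                        while (records.hasNext()) {
                            Tuple2<String, String> record = records.next();
                            // per-record scoring / sink writes would go here
                        }
                    }
                });
            }
        });

        jssc.start();
        jssc.awaitTermination();
    }
}

Each such output operation is submitted once per batch, which is why the log shows a long run of consecutive single-task ResultStages, all reported for the same foreachPartition call site.
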
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_937_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44258, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 927 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 941 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_939_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_939_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_941_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_941_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 942 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_940_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f58, negotiated timeout = 60000 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_940_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 944 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_942_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_942_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 943 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_927_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_927_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 945 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_943_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_943_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_945_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_945_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 946 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_944_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_944_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 948 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_946_piece0 on ***IP 
masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_946_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 947 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 929 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_947_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f58 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_947_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 958.0 (TID 958) in 100 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 958.0, whose tasks have all completed, from pool 18/04/17 17:09:00 INFO scheduler.DAGScheduler: ResultStage 958 (foreachPartition at PredictorEngineApp.java:153) finished in 0.101 s 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Job 958 finished: foreachPartition at PredictorEngineApp.java:153, took 0.160006 s 18/04/17 17:09:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x16d7a7e6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x16d7a7e60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_923_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33284, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:00 INFO storage.BlockManagerInfo: Removed broadcast_923_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:00 INFO spark.ContextCleaner: Cleaned accumulator 926 18/04/17 17:09:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f58 closed 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9615, negotiated timeout = 60000 18/04/17 17:09:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9615 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.33 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9615 closed 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.31 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 970.0 (TID 970) in 145 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 970.0, whose tasks have all completed, from pool 18/04/17 17:09:00 INFO scheduler.DAGScheduler: ResultStage 970 (foreachPartition at PredictorEngineApp.java:153) finished in 0.146 s 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Job 970 finished: foreachPartition at PredictorEngineApp.java:153, took 0.239575 s 18/04/17 17:09:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2b43944e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2b43944e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44264, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f5b, negotiated timeout = 60000 18/04/17 17:09:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f5b 18/04/17 17:09:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f5b closed 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.23 from job set of time 1523974140000 ms 18/04/17 17:09:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 960.0 (TID 960) in 411 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:09:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 960.0, whose tasks have all completed, from pool 18/04/17 17:09:00 INFO scheduler.DAGScheduler: ResultStage 960 (foreachPartition at PredictorEngineApp.java:153) finished in 0.412 s 18/04/17 17:09:00 INFO scheduler.DAGScheduler: Job 960 finished: foreachPartition at PredictorEngineApp.java:153, took 0.476728 s 18/04/17 17:09:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63ba5f05 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x63ba5f050x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33291, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a961a, negotiated timeout = 60000 18/04/17 17:09:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a961a 18/04/17 17:09:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a961a closed 18/04/17 17:09:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.25 from job set of time 1523974140000 ms 18/04/17 17:09:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 969.0 (TID 969) in 851 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:09:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 969.0, whose tasks have all completed, from pool 18/04/17 17:09:01 INFO scheduler.DAGScheduler: ResultStage 969 (foreachPartition at PredictorEngineApp.java:153) finished in 0.851 s 18/04/17 17:09:01 INFO scheduler.DAGScheduler: Job 969 finished: foreachPartition at PredictorEngineApp.java:153, took 0.942444 s 18/04/17 17:09:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x53fa7204 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x53fa72040x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44272, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f5f, negotiated timeout = 60000 18/04/17 17:09:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f5f 18/04/17 17:09:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f5f closed 18/04/17 17:09:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.7 from job set of time 1523974140000 ms 18/04/17 17:09:01 INFO spark.ContextCleaner: Cleaned accumulator 955 18/04/17 17:09:01 INFO storage.BlockManagerInfo: Removed broadcast_958_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:01 INFO storage.BlockManagerInfo: Removed broadcast_958_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:01 INFO spark.ContextCleaner: Cleaned accumulator 959 18/04/17 17:09:01 INFO storage.BlockManagerInfo: Removed broadcast_960_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:01 INFO storage.BlockManagerInfo: Removed broadcast_960_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:01 INFO spark.ContextCleaner: Cleaned accumulator 961 18/04/17 17:09:01 INFO storage.BlockManagerInfo: Removed broadcast_969_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:01 INFO storage.BlockManagerInfo: Removed broadcast_969_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:01 INFO spark.ContextCleaner: Cleaned accumulator 970 18/04/17 17:09:01 INFO storage.BlockManagerInfo: Removed broadcast_954_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:01 INFO storage.BlockManagerInfo: Removed broadcast_954_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:01 INFO storage.BlockManagerInfo: Removed broadcast_970_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:09:01 INFO storage.BlockManagerInfo: Removed broadcast_970_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:09:01 INFO spark.ContextCleaner: Cleaned accumulator 971 18/04/17 17:09:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 965.0 (TID 965) in 2086 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:09:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 965.0, whose tasks have all completed, from pool 18/04/17 17:09:02 INFO scheduler.DAGScheduler: ResultStage 965 (foreachPartition at PredictorEngineApp.java:153) finished in 2.086 s 18/04/17 17:09:02 INFO scheduler.DAGScheduler: Job 965 finished: foreachPartition at PredictorEngineApp.java:153, took 2.166123 s 18/04/17 17:09:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xb61f4f2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 
sessionTimeout=60000 watcher=hconnection-0xb61f4f20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44276, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f61, negotiated timeout = 60000 18/04/17 17:09:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f61 18/04/17 17:09:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f61 closed 18/04/17 17:09:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.8 from job set of time 1523974140000 ms 18/04/17 17:09:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 968.0 (TID 968) in 3511 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:09:03 INFO scheduler.DAGScheduler: ResultStage 968 (foreachPartition at PredictorEngineApp.java:153) finished in 3.511 s 18/04/17 17:09:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 968.0, whose tasks have all completed, from pool 18/04/17 17:09:03 INFO scheduler.DAGScheduler: Job 968 finished: foreachPartition at PredictorEngineApp.java:153, took 3.599456 s 18/04/17 17:09:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x286819f1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x286819f10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33306, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a961b, negotiated timeout = 60000 18/04/17 17:09:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a961b 18/04/17 17:09:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a961b closed 18/04/17 17:09:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.6 from job set of time 1523974140000 ms 18/04/17 17:09:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 962.0 (TID 962) in 5663 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:09:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 962.0, whose tasks have all completed, from pool 18/04/17 17:09:05 INFO scheduler.DAGScheduler: ResultStage 962 (foreachPartition at PredictorEngineApp.java:153) finished in 5.664 s 18/04/17 17:09:05 INFO scheduler.DAGScheduler: Job 962 finished: foreachPartition at PredictorEngineApp.java:153, took 5.734004 s 18/04/17 17:09:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4578f11c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4578f11c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33312, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a961d, negotiated timeout = 60000 18/04/17 17:09:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a961d 18/04/17 17:09:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a961d closed 18/04/17 17:09:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.15 from job set of time 1523974140000 ms 18/04/17 17:09:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 961.0 (TID 961) in 7564 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:09:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 961.0, whose tasks have all completed, from pool 18/04/17 17:09:07 INFO scheduler.DAGScheduler: ResultStage 961 (foreachPartition at PredictorEngineApp.java:153) finished in 7.564 s 18/04/17 17:09:07 INFO scheduler.DAGScheduler: Job 961 finished: foreachPartition at PredictorEngineApp.java:153, took 7.632290 s 18/04/17 17:09:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x318f837c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x318f837c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39699, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c966c, negotiated timeout = 60000 18/04/17 17:09:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c966c 18/04/17 17:09:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c966c closed 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.18 from job set of time 1523974140000 ms 18/04/17 17:09:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 972.0 (TID 972) in 7577 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:09:07 INFO scheduler.DAGScheduler: ResultStage 972 (foreachPartition at PredictorEngineApp.java:153) finished in 7.578 s 18/04/17 17:09:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 972.0, whose tasks have all completed, from pool 18/04/17 17:09:07 INFO scheduler.DAGScheduler: Job 972 finished: foreachPartition at PredictorEngineApp.java:153, took 7.676002 s 18/04/17 17:09:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x259f07a1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x259f07a10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33320, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a961f, negotiated timeout = 60000 18/04/17 17:09:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a961f 18/04/17 17:09:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a961f closed 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.34 from job set of time 1523974140000 ms 18/04/17 17:09:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 964.0 (TID 964) in 7653 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:09:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 964.0, whose tasks have all completed, from pool 18/04/17 17:09:07 INFO scheduler.DAGScheduler: ResultStage 964 (foreachPartition at PredictorEngineApp.java:153) finished in 7.654 s 18/04/17 17:09:07 INFO scheduler.DAGScheduler: Job 964 finished: foreachPartition at PredictorEngineApp.java:153, took 7.729710 s 18/04/17 17:09:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c73756d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c73756d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39705, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c966e, negotiated timeout = 60000 18/04/17 17:09:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c966e 18/04/17 17:09:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c966e closed 18/04/17 17:09:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.24 from job set of time 1523974140000 ms 18/04/17 17:09:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 956.0 (TID 956) in 8546 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:09:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 956.0, whose tasks have all completed, from pool 18/04/17 17:09:08 INFO scheduler.DAGScheduler: ResultStage 956 (foreachPartition at PredictorEngineApp.java:153) finished in 8.547 s 18/04/17 17:09:08 INFO scheduler.DAGScheduler: Job 956 finished: foreachPartition at PredictorEngineApp.java:153, took 8.600990 s 18/04/17 17:09:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1ef734a2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1ef734a20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39710, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c966f, negotiated timeout = 60000 18/04/17 17:09:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c966f 18/04/17 17:09:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c966f closed 18/04/17 17:09:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.26 from job set of time 1523974140000 ms 18/04/17 17:09:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 971.0 (TID 971) in 8887 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:09:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 971.0, whose tasks have all completed, from pool 18/04/17 17:09:09 INFO scheduler.DAGScheduler: ResultStage 971 (foreachPartition at PredictorEngineApp.java:153) finished in 8.887 s 18/04/17 17:09:09 INFO scheduler.DAGScheduler: Job 971 finished: foreachPartition at PredictorEngineApp.java:153, took 8.983185 s 18/04/17 17:09:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x46d897df connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x46d897df0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39714, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9670, negotiated timeout = 60000 18/04/17 17:09:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9670 18/04/17 17:09:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9670 closed 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.12 from job set of time 1523974140000 ms 18/04/17 17:09:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 957.0 (TID 957) in 9543 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:09:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 957.0, whose tasks have all completed, from pool 18/04/17 17:09:09 INFO scheduler.DAGScheduler: ResultStage 957 (foreachPartition at PredictorEngineApp.java:153) finished in 9.544 s 18/04/17 17:09:09 INFO scheduler.DAGScheduler: Job 957 finished: foreachPartition at PredictorEngineApp.java:153, took 9.601012 s 18/04/17 17:09:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x481980a0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x481980a00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39717, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9673, negotiated timeout = 60000 18/04/17 17:09:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9673 18/04/17 17:09:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9673 closed 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.9 from job set of time 1523974140000 ms 18/04/17 17:09:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 959.0 (TID 959) in 9601 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:09:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 959.0, whose tasks have all completed, from pool 18/04/17 17:09:09 INFO scheduler.DAGScheduler: ResultStage 959 (foreachPartition at PredictorEngineApp.java:153) finished in 9.602 s 18/04/17 17:09:09 INFO scheduler.DAGScheduler: Job 959 finished: foreachPartition at PredictorEngineApp.java:153, took 9.664581 s 18/04/17 17:09:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4e231b52 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4e231b520x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33338, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9621, negotiated timeout = 60000 18/04/17 17:09:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9621 18/04/17 17:09:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9621 closed 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.19 from job set of time 1523974140000 ms 18/04/17 17:09:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 973.0 (TID 973) in 9621 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:09:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 973.0, whose tasks have all completed, from pool 18/04/17 17:09:09 INFO scheduler.DAGScheduler: ResultStage 973 (foreachPartition at PredictorEngineApp.java:153) finished in 9.621 s 18/04/17 17:09:09 INFO scheduler.DAGScheduler: Job 973 finished: foreachPartition at PredictorEngineApp.java:153, took 9.721470 s 18/04/17 17:09:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2874417d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2874417d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39723, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9674, negotiated timeout = 60000 18/04/17 17:09:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9674 18/04/17 17:09:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9674 closed 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 967.0 (TID 967) in 9665 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:09:09 INFO scheduler.DAGScheduler: ResultStage 967 (foreachPartition at PredictorEngineApp.java:153) finished in 9.665 s 18/04/17 17:09:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 967.0, whose tasks have all completed, from pool 18/04/17 17:09:09 INFO scheduler.DAGScheduler: Job 966 finished: foreachPartition at PredictorEngineApp.java:153, took 9.751176 s 18/04/17 17:09:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xcae6287 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xcae62870x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.28 from job set of time 1523974140000 ms 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39726, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9675, negotiated timeout = 60000 18/04/17 17:09:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 949.0 (TID 949) in 9753 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:09:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 949.0, whose tasks have all completed, from pool 18/04/17 17:09:09 INFO scheduler.DAGScheduler: ResultStage 949 (foreachPartition at PredictorEngineApp.java:153) finished in 9.753 s 18/04/17 17:09:09 INFO scheduler.DAGScheduler: Job 949 finished: foreachPartition at PredictorEngineApp.java:153, took 9.764121 s 18/04/17 17:09:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9675 18/04/17 17:09:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9675 closed 18/04/17 17:09:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.2 from job set of time 1523974140000 ms 18/04/17 17:09:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.29 from job set of time 1523974140000 ms 18/04/17 17:09:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 948.0 (TID 948) in 9947 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:09:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 948.0, whose tasks have all completed, from pool 18/04/17 17:09:10 INFO scheduler.DAGScheduler: ResultStage 948 (foreachPartition at PredictorEngineApp.java:153) finished in 9.948 s 18/04/17 17:09:10 INFO scheduler.DAGScheduler: Job 948 finished: foreachPartition at PredictorEngineApp.java:153, took 9.954868 s 18/04/17 17:09:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1ecd778c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1ecd778c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44325, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f67, negotiated timeout = 60000 18/04/17 17:09:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f67 18/04/17 17:09:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f67 closed 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.32 from job set of time 1523974140000 ms 18/04/17 17:09:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 966.0 (TID 966) in 10423 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:09:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 966.0, whose tasks have all completed, from pool 18/04/17 17:09:10 INFO scheduler.DAGScheduler: ResultStage 966 (foreachPartition at PredictorEngineApp.java:153) finished in 10.423 s 18/04/17 17:09:10 INFO scheduler.DAGScheduler: Job 967 finished: foreachPartition at PredictorEngineApp.java:153, took 10.506216 s 18/04/17 17:09:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d8c766b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d8c766b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39733, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9677, negotiated timeout = 60000 18/04/17 17:09:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9677 18/04/17 17:09:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9677 closed 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.27 from job set of time 1523974140000 ms 18/04/17 17:09:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 955.0 (TID 955) in 10611 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:09:10 INFO scheduler.DAGScheduler: ResultStage 955 (foreachPartition at PredictorEngineApp.java:153) finished in 10.612 s 18/04/17 17:09:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 955.0, whose tasks have all completed, from pool 18/04/17 17:09:10 INFO scheduler.DAGScheduler: Job 955 finished: foreachPartition at PredictorEngineApp.java:153, took 10.663175 s 18/04/17 17:09:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6b3b75df connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6b3b75df0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44331, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f6b, negotiated timeout = 60000 18/04/17 17:09:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f6b 18/04/17 17:09:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f6b closed 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.20 from job set of time 1523974140000 ms 18/04/17 17:09:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 952.0 (TID 952) in 10705 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:09:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 952.0, whose tasks have all completed, from pool 18/04/17 17:09:10 INFO scheduler.DAGScheduler: ResultStage 952 (foreachPartition at PredictorEngineApp.java:153) finished in 10.705 s 18/04/17 17:09:10 INFO scheduler.DAGScheduler: Job 952 finished: foreachPartition at PredictorEngineApp.java:153, took 10.743216 s 18/04/17 17:09:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6e52aa3c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6e52aa3c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33357, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9625, negotiated timeout = 60000 18/04/17 17:09:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9625 18/04/17 17:09:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9625 closed 18/04/17 17:09:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.22 from job set of time 1523974140000 ms 18/04/17 17:09:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 951.0 (TID 951) in 11561 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:09:11 INFO scheduler.DAGScheduler: ResultStage 951 (foreachPartition at PredictorEngineApp.java:153) finished in 11.561 s 18/04/17 17:09:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 951.0, whose tasks have all completed, from pool 18/04/17 17:09:11 INFO scheduler.DAGScheduler: Job 951 finished: foreachPartition at PredictorEngineApp.java:153, took 11.593352 s 18/04/17 17:09:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xfe62cd6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xfe62cd60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44338, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f6c, negotiated timeout = 60000 18/04/17 17:09:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f6c 18/04/17 17:09:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f6c closed 18/04/17 17:09:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.5 from job set of time 1523974140000 ms 18/04/17 17:09:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 963.0 (TID 963) in 13325 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:09:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 963.0, whose tasks have all completed, from pool 18/04/17 17:09:13 INFO scheduler.DAGScheduler: ResultStage 963 (foreachPartition at PredictorEngineApp.java:153) finished in 13.326 s 18/04/17 17:09:13 INFO scheduler.DAGScheduler: Job 963 finished: foreachPartition at PredictorEngineApp.java:153, took 13.398728 s 18/04/17 17:09:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2bdd6366 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2bdd63660x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44344, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f6f, negotiated timeout = 60000 18/04/17 17:09:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f6f 18/04/17 17:09:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f6f closed 18/04/17 17:09:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.1 from job set of time 1523974140000 ms 18/04/17 17:09:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 953.0 (TID 953) in 14187 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:09:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 953.0, whose tasks have all completed, from pool 18/04/17 17:09:14 INFO scheduler.DAGScheduler: ResultStage 953 (foreachPartition at PredictorEngineApp.java:153) finished in 14.187 s 18/04/17 17:09:14 INFO scheduler.DAGScheduler: Job 953 finished: foreachPartition at PredictorEngineApp.java:153, took 14.231344 s 18/04/17 17:09:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x224cb4c3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x224cb4c30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39753, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9678, negotiated timeout = 60000 18/04/17 17:09:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9678 18/04/17 17:09:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9678 closed 18/04/17 17:09:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.11 from job set of time 1523974140000 ms 18/04/17 17:09:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 950.0 (TID 950) in 15698 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:09:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 950.0, whose tasks have all completed, from pool 18/04/17 17:09:15 INFO scheduler.DAGScheduler: ResultStage 950 (foreachPartition at PredictorEngineApp.java:153) finished in 15.698 s 18/04/17 17:09:15 INFO scheduler.DAGScheduler: Job 950 finished: foreachPartition at PredictorEngineApp.java:153, took 15.723044 s 18/04/17 17:09:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x58cf6e5d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:09:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x58cf6e5d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:09:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:09:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33375, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:09:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9628, negotiated timeout = 60000 18/04/17 17:09:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9628 18/04/17 17:09:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9628 closed 18/04/17 17:09:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:09:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974140000 ms.10 from job set of time 1523974140000 ms 18/04/17 17:09:15 INFO scheduler.JobScheduler: Total delay: 15.809 s for time 1523974140000 ms (execution: 15.758 s) 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1260 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1260 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1260 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1260 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1261 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1261 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1261 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1261 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1262 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1262 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1262 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1262 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1263 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1263 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1263 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1263 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1264 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1264 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1264 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1264 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1265 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1265 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1265 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1265 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1266 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1266 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1266 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1266 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1267 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1267 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1267 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1267 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1268 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1268 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1268 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1268 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1269 
from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1269 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1269 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1269 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1270 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1270 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1270 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1270 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1271 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1271 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1271 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1271 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1272 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1272 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1272 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1272 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1273 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1273 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1273 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1273 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1274 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1274 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1274 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1274 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1275 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1275 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1275 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1275 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1276 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1276 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1276 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1276 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1277 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1277 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1277 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1277 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1278 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1278 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1278 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1278 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1279 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1279 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1279 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1279 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1280 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1280 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1280 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1280 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1281 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1281 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1281 from 
persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1281 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1282 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1282 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1282 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1282 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1283 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1283 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1283 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1283 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1284 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1284 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1284 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1284 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1285 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1285 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1285 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1285 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1286 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1286 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1286 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1286 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1287 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1287 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1287 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1287 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1288 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1288 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1288 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1288 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1289 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1289 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1289 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1289 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1290 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1290 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1290 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1290 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1291 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1291 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1291 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1291 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1292 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1292 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1292 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1292 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1293 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1293 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1293 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1293 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1294 from 
persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1294 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1294 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1294 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1295 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1295 18/04/17 17:09:15 INFO kafka.KafkaRDD: Removing RDD 1295 from persistence list 18/04/17 17:09:15 INFO storage.BlockManager: Removing RDD 1295 18/04/17 17:09:15 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:09:15 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974020000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Added jobs for time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.0 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.1 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.2 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.3 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.0 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.4 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.6 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.3 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.8 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.5 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.7 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.4 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.10 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.9 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.11 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.12 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.13 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.13 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.14 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.15 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.14 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.16 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.16 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.17 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.17 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.19 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.18 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.20 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.21 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.22 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.21 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.24 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.25 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.23 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.26 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.27 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.28 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.29 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.30 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.31 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.32 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.30 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.34 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.33 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974200000 ms.35 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.35 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: 
foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 974 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 974 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 974 (KafkaRDD[1364] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_974 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_974_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_974_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 974 from broadcast at 
DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 974 (KafkaRDD[1364] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 974.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 975 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 975 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 975 (KafkaRDD[1344] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 974.0 (TID 974, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_975 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_975_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_975_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 975 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 975 (KafkaRDD[1344] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 975.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 976 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 976 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 976 (KafkaRDD[1342] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 975.0 (TID 975, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_976 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_976_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_976_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 976 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 976 (KafkaRDD[1342] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 976.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 977 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 977 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 977 (KafkaRDD[1354] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_974_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 976.0 (TID 976, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_977 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_977_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_977_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 977 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 977 (KafkaRDD[1354] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 977.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 978 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 978 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 978 (KafkaRDD[1338] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 977.0 (TID 977, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_978 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_978_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_978_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_975_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 978 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 978 (KafkaRDD[1338] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 978.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 979 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 979 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: 
Submitting ResultStage 979 (KafkaRDD[1339] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 978.0 (TID 978, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_979 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_979_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_979_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 979 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 979 (KafkaRDD[1339] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 979.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 980 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 980 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 980 (KafkaRDD[1358] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 979.0 (TID 979, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_980 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_980_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_980_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 980 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 980 (KafkaRDD[1358] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 980.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 981 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 981 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 981 (KafkaRDD[1357] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 980.0 (TID 980, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_981 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_981_piece0 stored as bytes in memory 
(estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_981_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 981 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 981 (KafkaRDD[1357] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 981.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 982 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_977_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 982 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 982 (KafkaRDD[1365] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 981.0 (TID 981, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_982 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_976_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_982_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_982_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 982 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 982 (KafkaRDD[1365] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 982.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 983 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 983 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 983 (KafkaRDD[1334] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 982.0 (TID 982, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_983 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_983_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_983_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created 
broadcast 983 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 983 (KafkaRDD[1334] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 983.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 984 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 984 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 984 (KafkaRDD[1355] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_978_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_984 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 983.0 (TID 983, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_979_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_982_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_984_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_984_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 984 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 984 (KafkaRDD[1355] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 984.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 986 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 985 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_980_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 985 (KafkaRDD[1352] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 984.0 (TID 984, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_985 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_985_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_985_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 
491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 985 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 985 (KafkaRDD[1352] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 985.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 985 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 986 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 986 (KafkaRDD[1356] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_986 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 985.0 (TID 985, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_986_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_986_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 986 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 986 (KafkaRDD[1356] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 986.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 987 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 987 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 987 (KafkaRDD[1340] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_987 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 986.0 (TID 986, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_983_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_981_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_987_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_987_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_948_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 987 from 
broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 987 (KafkaRDD[1340] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 987.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 988 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 988 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 988 (KafkaRDD[1333] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_988 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 987.0 (TID 987, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_948_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_985_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_988_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_988_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_957_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 988 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_986_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 988 (KafkaRDD[1333] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 988.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 989 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 989 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 989 (KafkaRDD[1361] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_989 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_957_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 988.0 (TID 988, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 958 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed 
broadcast_956_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_989_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_989_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_956_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 989 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 989 (KafkaRDD[1361] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_984_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 989.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 990 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 990 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 990 (KafkaRDD[1343] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_990 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 957 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 989.0 (TID 989, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_961_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_961_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 962 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_959_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_990_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_990_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 990 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 990 (KafkaRDD[1343] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 990.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 992 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 991 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO 
scheduler.DAGScheduler: Submitting ResultStage 991 (KafkaRDD[1347] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_991 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 990.0 (TID 990, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_959_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 960 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_963_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_963_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_988_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_991_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_991_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 991 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 991 (KafkaRDD[1347] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 991.0 with 1 tasks 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_987_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 991 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 964 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 992 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 992 (KafkaRDD[1351] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_992 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_962_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 991.0 (TID 991, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_962_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 963 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_992_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_965_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_992_piece0 in 
memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 992 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 992 (KafkaRDD[1351] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 992.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 993 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 993 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 993 (KafkaRDD[1341] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_993 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 992.0 (TID 992, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_990_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_965_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_993_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 966 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_993_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 993 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 993 (KafkaRDD[1341] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 993.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 994 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 994 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 994 (KafkaRDD[1360] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_964_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_994 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 993.0 (TID 993, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_991_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_964_piece0 on 
***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_994_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_994_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 994 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 994 (KafkaRDD[1360] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 994.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 995 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 995 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 995 (KafkaRDD[1363] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_995 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 994.0 (TID 994, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 965 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_992_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_967_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_995_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_995_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_967_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 995 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 995 (KafkaRDD[1363] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 995.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 996 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 996 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 996 (KafkaRDD[1337] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 968 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_996 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO 
storage.BlockManagerInfo: Removed broadcast_966_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 995.0 (TID 995, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_989_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_966_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_996_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_996_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 996 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 996 (KafkaRDD[1337] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 996.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 997 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 997 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 997 (KafkaRDD[1366] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_997 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 996.0 (TID 996, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 967 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_997_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_997_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_951_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 997 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 997 (KafkaRDD[1366] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 997.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 998 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 998 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 998 (KafkaRDD[1350] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO 
storage.BlockManagerInfo: Added broadcast_995_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_998 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 997.0 (TID 997, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_951_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 969 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 973 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_971_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_998_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_998_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 998 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 998 (KafkaRDD[1350] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 998.0 with 1 tasks 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Got job 999 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 999 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting ResultStage 999 (KafkaRDD[1359] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_999 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 998.0 (TID 998, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_993_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_971_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.MemoryStore: Block broadcast_999_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_999_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO spark.SparkContext: Created broadcast 999 from broadcast at DAGScheduler.scala:1006 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 999 (KafkaRDD[1359] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Adding task set 999.0 with 1 tasks 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 972 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_968_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 999.0 (TID 999, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_968_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_994_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_973_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_999_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_996_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_973_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 974 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_972_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_998_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_972_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Added broadcast_997_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 950 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 952 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 953 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 949 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 956 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_955_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_955_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 951 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_953_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_953_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_952_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_952_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO spark.ContextCleaner: Cleaned accumulator 954 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_949_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_949_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_950_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:10:00 INFO storage.BlockManagerInfo: Removed broadcast_950_piece0 on 
***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 993.0 (TID 993) in 54 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: ResultStage 993 (foreachPartition at PredictorEngineApp.java:153) finished in 0.055 s 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 993.0, whose tasks have all completed, from pool 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 987.0 (TID 987) in 74 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: ResultStage 987 (foreachPartition at PredictorEngineApp.java:153) finished in 0.075 s 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Job 993 finished: foreachPartition at PredictorEngineApp.java:153, took 0.140828 s 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Job 987 finished: foreachPartition at PredictorEngineApp.java:153, took 0.141276 s 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 987.0, whose tasks have all completed, from pool 18/04/17 17:10:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x17022ade connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7988412 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x17022ade0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x79884120x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33586, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39969, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9638, negotiated timeout = 60000 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 994.0 (TID 994) in 65 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: ResultStage 994 (foreachPartition at PredictorEngineApp.java:153) finished in 0.066 s 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 994.0, whose tasks have all completed, from pool 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Job 994 finished: foreachPartition at PredictorEngineApp.java:153, took 0.154768 s 18/04/17 17:10:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x311be1d5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x311be1d50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44566, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9683, negotiated timeout = 60000 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f76, negotiated timeout = 60000 18/04/17 17:10:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9683 18/04/17 17:10:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9638 18/04/17 17:10:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f76 18/04/17 17:10:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9683 closed 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9638 closed 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f76 closed 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.9 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.8 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.28 from job set of time 1523974200000 ms 18/04/17 17:10:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 979.0 (TID 979) in 730 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:10:00 INFO scheduler.DAGScheduler: ResultStage 979 (foreachPartition at PredictorEngineApp.java:153) finished in 0.731 s 18/04/17 17:10:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 979.0, whose tasks have all completed, from pool 18/04/17 17:10:00 INFO scheduler.DAGScheduler: Job 979 finished: foreachPartition at PredictorEngineApp.java:153, took 0.758184 s 18/04/17 17:10:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x33a82388 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x33a823880x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39981, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9689, negotiated timeout = 60000 18/04/17 17:10:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9689 18/04/17 17:10:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9689 closed 18/04/17 17:10:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.7 from job set of time 1523974200000 ms 18/04/17 17:10:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 981.0 (TID 981) in 1226 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:10:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 981.0, whose tasks have all completed, from pool 18/04/17 17:10:01 INFO scheduler.DAGScheduler: ResultStage 981 (foreachPartition at PredictorEngineApp.java:153) finished in 1.226 s 18/04/17 17:10:01 INFO scheduler.DAGScheduler: Job 981 finished: foreachPartition at PredictorEngineApp.java:153, took 1.260299 s 18/04/17 17:10:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2cbdb588 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2cbdb5880x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33605, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a963e, negotiated timeout = 60000 18/04/17 17:10:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a963e 18/04/17 17:10:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a963e closed 18/04/17 17:10:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.25 from job set of time 1523974200000 ms 18/04/17 17:10:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 998.0 (TID 998) in 2863 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:10:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 998.0, whose tasks have all completed, from pool 18/04/17 17:10:03 INFO scheduler.DAGScheduler: ResultStage 998 (foreachPartition at PredictorEngineApp.java:153) finished in 2.863 s 18/04/17 17:10:03 INFO scheduler.DAGScheduler: Job 998 finished: foreachPartition at PredictorEngineApp.java:153, took 2.963030 s 18/04/17 17:10:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3f27473f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3f27473f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44589, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 974.0 (TID 974) in 2963 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:10:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 974.0, whose tasks have all completed, from pool 18/04/17 17:10:03 INFO scheduler.DAGScheduler: ResultStage 974 (foreachPartition at PredictorEngineApp.java:153) finished in 2.963 s 18/04/17 17:10:03 INFO scheduler.DAGScheduler: Job 974 finished: foreachPartition at PredictorEngineApp.java:153, took 2.971283 s 18/04/17 17:10:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1545be73 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1545be730x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44590, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f7f, negotiated timeout = 60000 18/04/17 17:10:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f80, negotiated timeout = 60000 18/04/17 17:10:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f7f 18/04/17 17:10:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f7f closed 18/04/17 17:10:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.18 from job set of time 1523974200000 ms 18/04/17 17:10:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.32 from job set of time 1523974200000 ms 18/04/17 17:10:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 985.0 (TID 985) in 2967 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:10:03 INFO scheduler.DAGScheduler: ResultStage 985 (foreachPartition at PredictorEngineApp.java:153) finished in 2.967 s 18/04/17 17:10:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 985.0, whose tasks have all completed, from pool 18/04/17 17:10:03 INFO scheduler.DAGScheduler: Job 986 finished: foreachPartition at PredictorEngineApp.java:153, took 3.016872 s 18/04/17 17:10:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x441b543c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x441b543c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39998, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c968b, negotiated timeout = 60000 18/04/17 17:10:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c968b 18/04/17 17:10:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c968b closed 18/04/17 17:10:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.20 from job set of time 1523974200000 ms 18/04/17 17:10:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 995.0 (TID 995) in 4266 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:10:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 995.0, whose tasks have all completed, from pool 18/04/17 17:10:04 INFO scheduler.DAGScheduler: ResultStage 995 (foreachPartition at PredictorEngineApp.java:153) finished in 4.267 s 18/04/17 17:10:04 INFO scheduler.DAGScheduler: Job 995 finished: foreachPartition at PredictorEngineApp.java:153, took 4.358851 s 18/04/17 17:10:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x603e87ce connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x603e87ce0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33624, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9640, negotiated timeout = 60000 18/04/17 17:10:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9640 18/04/17 17:10:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9640 closed 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.31 from job set of time 1523974200000 ms 18/04/17 17:10:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 991.0 (TID 991) in 4679 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:10:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 991.0, whose tasks have all completed, from pool 18/04/17 17:10:04 INFO scheduler.DAGScheduler: ResultStage 991 (foreachPartition at PredictorEngineApp.java:153) finished in 4.680 s 18/04/17 17:10:04 INFO scheduler.DAGScheduler: Job 992 finished: foreachPartition at PredictorEngineApp.java:153, took 4.760390 s 18/04/17 17:10:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5c04711 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5c047110x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44604, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f82, negotiated timeout = 60000 18/04/17 17:10:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f82 18/04/17 17:10:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f82 closed 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.15 from job set of time 1523974200000 ms 18/04/17 17:10:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 978.0 (TID 978) in 4800 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:10:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 978.0, whose tasks have all completed, from pool 18/04/17 17:10:04 INFO scheduler.DAGScheduler: ResultStage 978 (foreachPartition at PredictorEngineApp.java:153) finished in 4.800 s 18/04/17 17:10:04 INFO scheduler.DAGScheduler: Job 978 finished: foreachPartition at PredictorEngineApp.java:153, took 4.824498 s 18/04/17 17:10:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2f0bf789 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2f0bf7890x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33630, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9643, negotiated timeout = 60000 18/04/17 17:10:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9643 18/04/17 17:10:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9643 closed 18/04/17 17:10:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.6 from job set of time 1523974200000 ms 18/04/17 17:10:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 975.0 (TID 975) in 5364 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:10:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 975.0, whose tasks have all completed, from pool 18/04/17 17:10:05 INFO scheduler.DAGScheduler: ResultStage 975 (foreachPartition at PredictorEngineApp.java:153) finished in 5.364 s 18/04/17 17:10:05 INFO scheduler.DAGScheduler: Job 975 finished: foreachPartition at PredictorEngineApp.java:153, took 5.376188 s 18/04/17 17:10:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3dfd2b0f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3dfd2b0f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44611, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f83, negotiated timeout = 60000 18/04/17 17:10:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f83 18/04/17 17:10:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f83 closed 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.12 from job set of time 1523974200000 ms 18/04/17 17:10:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 989.0 (TID 989) in 5381 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:10:05 INFO scheduler.DAGScheduler: ResultStage 989 (foreachPartition at PredictorEngineApp.java:153) finished in 5.382 s 18/04/17 17:10:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 989.0, whose tasks have all completed, from pool 18/04/17 17:10:05 INFO scheduler.DAGScheduler: Job 989 finished: foreachPartition at PredictorEngineApp.java:153, took 5.454822 s 18/04/17 17:10:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x60b4b4c3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x60b4b4c30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40019, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c968f, negotiated timeout = 60000 18/04/17 17:10:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c968f 18/04/17 17:10:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c968f closed 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.29 from job set of time 1523974200000 ms 18/04/17 17:10:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 999.0 (TID 999) in 5484 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:10:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 999.0, whose tasks have all completed, from pool 18/04/17 17:10:05 INFO scheduler.DAGScheduler: ResultStage 999 (foreachPartition at PredictorEngineApp.java:153) finished in 5.484 s 18/04/17 17:10:05 INFO scheduler.DAGScheduler: Job 999 finished: foreachPartition at PredictorEngineApp.java:153, took 5.584704 s 18/04/17 17:10:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5e0ea51c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5e0ea51c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44617, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f84, negotiated timeout = 60000 18/04/17 17:10:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f84 18/04/17 17:10:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f84 closed 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.27 from job set of time 1523974200000 ms 18/04/17 17:10:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 997.0 (TID 997) in 5689 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:10:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 997.0, whose tasks have all completed, from pool 18/04/17 17:10:05 INFO scheduler.DAGScheduler: ResultStage 997 (foreachPartition at PredictorEngineApp.java:153) finished in 5.689 s 18/04/17 17:10:05 INFO scheduler.DAGScheduler: Job 997 finished: foreachPartition at PredictorEngineApp.java:153, took 5.786735 s 18/04/17 17:10:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4784eee4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4784eee40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33643, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9647, negotiated timeout = 60000 18/04/17 17:10:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9647 18/04/17 17:10:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9647 closed 18/04/17 17:10:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.34 from job set of time 1523974200000 ms 18/04/17 17:10:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 992.0 (TID 992) in 5992 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:10:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 992.0, whose tasks have all completed, from pool 18/04/17 17:10:06 INFO scheduler.DAGScheduler: ResultStage 992 (foreachPartition at PredictorEngineApp.java:153) finished in 5.993 s 18/04/17 17:10:06 INFO scheduler.DAGScheduler: Job 991 finished: foreachPartition at PredictorEngineApp.java:153, took 6.076381 s 18/04/17 17:10:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6bcdd8b9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6bcdd8b90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44623, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f86, negotiated timeout = 60000 18/04/17 17:10:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f86 18/04/17 17:10:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f86 closed 18/04/17 17:10:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.19 from job set of time 1523974200000 ms 18/04/17 17:10:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 982.0 (TID 982) in 6170 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:10:06 INFO scheduler.DAGScheduler: ResultStage 982 (foreachPartition at PredictorEngineApp.java:153) finished in 6.170 s 18/04/17 17:10:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 982.0, whose tasks have all completed, from pool 18/04/17 17:10:06 INFO scheduler.DAGScheduler: Job 982 finished: foreachPartition at PredictorEngineApp.java:153, took 6.208952 s 18/04/17 17:10:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6e000948 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6e0009480x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40032, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9690, negotiated timeout = 60000 18/04/17 17:10:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9690 18/04/17 17:10:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9690 closed 18/04/17 17:10:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.33 from job set of time 1523974200000 ms 18/04/17 17:10:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 990.0 (TID 990) in 6900 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:10:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 990.0, whose tasks have all completed, from pool 18/04/17 17:10:07 INFO scheduler.DAGScheduler: ResultStage 990 (foreachPartition at PredictorEngineApp.java:153) finished in 6.901 s 18/04/17 17:10:07 INFO scheduler.DAGScheduler: Job 990 finished: foreachPartition at PredictorEngineApp.java:153, took 6.977527 s 18/04/17 17:10:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x501d8337 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x501d83370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33653, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9648, negotiated timeout = 60000 18/04/17 17:10:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9648 18/04/17 17:10:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9648 closed 18/04/17 17:10:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.11 from job set of time 1523974200000 ms 18/04/17 17:10:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 986.0 (TID 986) in 7776 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:10:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 986.0, whose tasks have all completed, from pool 18/04/17 17:10:07 INFO scheduler.DAGScheduler: ResultStage 986 (foreachPartition at PredictorEngineApp.java:153) finished in 7.777 s 18/04/17 17:10:07 INFO scheduler.DAGScheduler: Job 985 finished: foreachPartition at PredictorEngineApp.java:153, took 7.829366 s 18/04/17 17:10:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2f336b54 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2f336b540x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33657, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9649, negotiated timeout = 60000 18/04/17 17:10:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9649 18/04/17 17:10:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9649 closed 18/04/17 17:10:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.24 from job set of time 1523974200000 ms 18/04/17 17:10:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 983.0 (TID 983) in 8101 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:10:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 983.0, whose tasks have all completed, from pool 18/04/17 17:10:08 INFO scheduler.DAGScheduler: ResultStage 983 (foreachPartition at PredictorEngineApp.java:153) finished in 8.102 s 18/04/17 17:10:08 INFO scheduler.DAGScheduler: Job 983 finished: foreachPartition at PredictorEngineApp.java:153, took 8.143384 s 18/04/17 17:10:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xb5d7b45 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xb5d7b450x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44637, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f88, negotiated timeout = 60000 18/04/17 17:10:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f88 18/04/17 17:10:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f88 closed 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.2 from job set of time 1523974200000 ms 18/04/17 17:10:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 977.0 (TID 977) in 8534 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:10:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 977.0, whose tasks have all completed, from pool 18/04/17 17:10:08 INFO scheduler.DAGScheduler: ResultStage 977 (foreachPartition at PredictorEngineApp.java:153) finished in 8.535 s 18/04/17 17:10:08 INFO scheduler.DAGScheduler: Job 977 finished: foreachPartition at PredictorEngineApp.java:153, took 8.554952 s 18/04/17 17:10:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2026fbc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2026fbc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44641, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f89, negotiated timeout = 60000 18/04/17 17:10:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f89 18/04/17 17:10:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f89 closed 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.22 from job set of time 1523974200000 ms 18/04/17 17:10:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 984.0 (TID 984) in 8738 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:10:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 984.0, whose tasks have all completed, from pool 18/04/17 17:10:08 INFO scheduler.DAGScheduler: ResultStage 984 (foreachPartition at PredictorEngineApp.java:153) finished in 8.738 s 18/04/17 17:10:08 INFO scheduler.DAGScheduler: Job 984 finished: foreachPartition at PredictorEngineApp.java:153, took 8.783423 s 18/04/17 17:10:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x36e93178 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x36e931780x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40049, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9693, negotiated timeout = 60000 18/04/17 17:10:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9693 18/04/17 17:10:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9693 closed 18/04/17 17:10:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.23 from job set of time 1523974200000 ms 18/04/17 17:10:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 996.0 (TID 996) in 9353 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:10:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 996.0, whose tasks have all completed, from pool 18/04/17 17:10:09 INFO scheduler.DAGScheduler: ResultStage 996 (foreachPartition at PredictorEngineApp.java:153) finished in 9.353 s 18/04/17 17:10:09 INFO scheduler.DAGScheduler: Job 996 finished: foreachPartition at PredictorEngineApp.java:153, took 9.448901 s 18/04/17 17:10:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x46b0f4f5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x46b0f4f50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44649, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f8a, negotiated timeout = 60000 18/04/17 17:10:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f8a 18/04/17 17:10:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f8a closed 18/04/17 17:10:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.5 from job set of time 1523974200000 ms 18/04/17 17:10:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 988.0 (TID 988) in 11346 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:10:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 988.0, whose tasks have all completed, from pool 18/04/17 17:10:11 INFO scheduler.DAGScheduler: ResultStage 988 (foreachPartition at PredictorEngineApp.java:153) finished in 11.346 s 18/04/17 17:10:11 INFO scheduler.DAGScheduler: Job 988 finished: foreachPartition at PredictorEngineApp.java:153, took 11.415801 s 18/04/17 17:10:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ac03e74 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4ac03e740x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33678, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a964f, negotiated timeout = 60000 18/04/17 17:10:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a964f 18/04/17 17:10:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a964f closed 18/04/17 17:10:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.1 from job set of time 1523974200000 ms 18/04/17 17:10:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 980.0 (TID 980) in 13681 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:10:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 980.0, whose tasks have all completed, from pool 18/04/17 17:10:13 INFO scheduler.DAGScheduler: ResultStage 980 (foreachPartition at PredictorEngineApp.java:153) finished in 13.682 s 18/04/17 17:10:13 INFO scheduler.DAGScheduler: Job 980 finished: foreachPartition at PredictorEngineApp.java:153, took 13.713327 s 18/04/17 17:10:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x70c42492 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x70c424920x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33683, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9650, negotiated timeout = 60000 18/04/17 17:10:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9650 18/04/17 17:10:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9650 closed 18/04/17 17:10:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.26 from job set of time 1523974200000 ms 18/04/17 17:10:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 976.0 (TID 976) in 14687 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:10:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 976.0, whose tasks have all completed, from pool 18/04/17 17:10:14 INFO scheduler.DAGScheduler: ResultStage 976 (foreachPartition at PredictorEngineApp.java:153) finished in 14.688 s 18/04/17 17:10:14 INFO scheduler.DAGScheduler: Job 976 finished: foreachPartition at PredictorEngineApp.java:153, took 14.704352 s 18/04/17 17:10:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x77769f04 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x77769f040x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
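[editorial annotation] The entries above repeat the same sequence once per streaming job: a fresh hconnection-0x... ZooKeeper session is opened against the HBase quorum, the job's foreachPartition at PredictorEngineApp.java:153 completes, and the session is immediately closed again. That is the signature of an HBase write that creates a new connection inside every foreachPartition call. The application source is not part of this log, so the sketch below is only an assumed shape of that code path; the class name HBaseSinkSketch, the table name "predictions", and the column family/qualifier are invented for illustration and the API shown (HBase 1.x ConnectionFactory, Spark 1.6 Java streaming API) is an assumption about the versions in use.

```java
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.streaming.api.java.JavaDStream;

public final class HBaseSinkSketch {

    // Writes one partition to HBase with a connection opened and closed per call.
    // Every invocation would produce one "Initiating client connection ..." /
    // "Session ... closed" pair like the ones in the log above.
    static void writePartition(Iterator<String> records) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // picks up hbase-site.xml from the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf);     // opens the ZooKeeper session
             Table table = connection.getTable(TableName.valueOf("predictions"))) { // table name is an assumption
            while (records.hasNext()) {
                String record = records.next();
                Put put = new Put(Bytes.toBytes(record));
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("v"), Bytes.toBytes(record));
                table.put(put);
            }
        }   // closing the connection emits the "Closing zookeeper sessionid=..." lines seen above
    }

    static void attach(JavaDStream<String> stream) {
        // One foreachPartition per micro-batch and per stream, mirroring
        // foreachPartition at PredictorEngineApp.java:153 in this log.
        stream.foreachRDD(rdd -> rdd.foreachPartition(HBaseSinkSketch::writePartition));
    }

    private HBaseSinkSketch() { }
}
```

Reusing a single connection per executor (for example via a lazily initialised singleton) would avoid paying the ZooKeeper session setup before every job, but nothing in this log confirms which approach the application actually takes.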
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44665, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f8d, negotiated timeout = 60000 18/04/17 17:10:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f8d 18/04/17 17:10:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f8d closed 18/04/17 17:10:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974200000 ms.10 from job set of time 1523974200000 ms 18/04/17 17:10:14 INFO scheduler.JobScheduler: Total delay: 14.794 s for time 1523974200000 ms (execution: 14.739 s) 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1296 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1296 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1296 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1296 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1297 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1297 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1297 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1297 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1298 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1298 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1298 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1298 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1299 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1299 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1299 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1299 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1300 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1300 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1300 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1300 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1301 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1301 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1301 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1301 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1302 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1302 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1302 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1302 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1303 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1303 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1303 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1303 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1304 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1304 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1304 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1304 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1305 
from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1305 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1305 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1305 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1306 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1306 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1306 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1306 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1307 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1307 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1307 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1307 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1308 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1308 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1308 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1308 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1309 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1309 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1309 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1309 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1310 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1310 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1310 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1310 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1311 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1311 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1311 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1311 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1312 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1312 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1312 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1312 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1313 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1313 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1313 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1313 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1314 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1314 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1314 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1314 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1315 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1315 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1315 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1315 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1316 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1316 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1316 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1316 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1317 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1317 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1317 from 
persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1317 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1318 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1318 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1318 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1318 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1319 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1319 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1319 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1319 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1320 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1320 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1320 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1320 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1321 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1321 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1321 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1321 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1322 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1322 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1322 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1322 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1323 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1323 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1323 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1323 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1324 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1324 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1324 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1324 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1325 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1325 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1325 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1325 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1326 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1326 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1326 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1326 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1327 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1327 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1327 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1327 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1328 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1328 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1328 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1328 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1329 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1329 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1329 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1329 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1330 from 
persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1330 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1330 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1330 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1331 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1331 18/04/17 17:10:14 INFO kafka.KafkaRDD: Removing RDD 1331 from persistence list 18/04/17 17:10:14 INFO storage.BlockManager: Removing RDD 1331 18/04/17 17:10:14 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:10:14 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974080000 ms 18/04/17 17:10:27 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 756.0 (TID 756) in 567782 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:10:27 INFO cluster.YarnClusterScheduler: Removed TaskSet 756.0, whose tasks have all completed, from pool 18/04/17 17:10:27 INFO scheduler.DAGScheduler: ResultStage 756 (foreachPartition at PredictorEngineApp.java:153) finished in 567.782 s 18/04/17 17:10:27 INFO scheduler.DAGScheduler: Job 756 finished: foreachPartition at PredictorEngineApp.java:153, took 567.855211 s 18/04/17 17:10:27 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x13605cc0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:10:27 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x13605cc00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:10:27 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
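[editorial annotation] Once every job of batch 1523974200000 ms has finished, the driver unpersists that batch's KafkaRDDs (RDDs 1296 through 1331 above) and drops the old batch metadata. Those RDDs come from the direct Kafka stream created at PredictorEngineApp.java:125, which materialises one KafkaRDD per stream per batch. Below is a minimal sketch of such a setup, assuming a 60 s batch interval (the batch timestamps in this log are 60000 ms apart) and one direct stream per topic, which would account for the independent jobs ms.0 through ms.35 started for each batch; the broker addresses and topic names are placeholders, not values from this application.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public final class DirectStreamSketch {
    public static void main(String[] args) throws Exception {
        // 60 s batch interval inferred from the batch timestamps in the log
        // (1523974200000, 1523974260000, ... are 60000 ms apart).
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker-1:9092,broker-2:9092");  // placeholder brokers

        // One direct stream per topic would explain the many independent streaming
        // jobs per batch seen in the log; the topic names here are assumptions.
        for (String topic : Arrays.asList("events-a", "events-b")) {
            Set<String> topics = new HashSet<>();
            topics.add(topic);
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, topics);
            // Each batch materialises one KafkaRDD per stream; once that batch's jobs
            // finish, the RDD is unpersisted ("Removing RDD ... from persistence list").
            stream.foreachRDD(rdd -> rdd.foreachPartition(records -> { /* HBase write, see sketch above */ }));
        }

        jssc.start();
        jssc.awaitTermination();
    }

    private DirectStreamSketch() { }
}
```

Each ResultStage in this log has a single output partition, which would correspond to topics with one Kafka partition each, though the log does not show the topic layout directly.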
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:10:27 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44686, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:10:27 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f90, negotiated timeout = 60000 18/04/17 17:10:27 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f90 18/04/17 17:10:27 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f90 closed 18/04/17 17:10:27 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:10:27 INFO scheduler.JobScheduler: Finished job streaming job 1523973660000 ms.10 from job set of time 1523973660000 ms 18/04/17 17:10:27 INFO scheduler.JobScheduler: Total delay: 567.963 s for time 1523973660000 ms (execution: 567.897 s) 18/04/17 17:10:27 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:10:27 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 975 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 982 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 978 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_977_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_977_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_981_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_981_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 984 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_982_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_982_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 983 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_984_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_984_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 985 18/04/17 17:11:00 INFO scheduler.JobScheduler: Added jobs for time 1523974260000 ms 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_983_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.0 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.1 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.2 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.3 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.0 from job set of 
time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.4 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.5 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.6 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.3 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.4 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.7 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.9 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.8 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.10 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.11 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.12 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.13 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.14 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.13 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.15 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.14 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.18 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.16 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.19 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.20 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.16 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_983_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.21 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.22 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.21 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.17 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 
ms.23 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.17 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.24 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.26 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.25 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.27 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.28 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.29 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.30 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.31 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.32 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.30 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.33 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.35 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974260000 ms.34 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.35 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 987 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_985_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_985_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 986 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_987_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1000 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1000 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1000 (KafkaRDD[1397] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting 
job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_987_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1000 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 988 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_986_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_986_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1000_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1000_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_988_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1000 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 
INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1000 (KafkaRDD[1397] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1000.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1001 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_988_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1001 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1001 (KafkaRDD[1387] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1000.0 (TID 1000, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1001 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 989 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 991 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_989_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_989_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 990 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_991_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1001_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1001_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1001 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1001 (KafkaRDD[1387] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1001.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1002 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1002 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1002 (KafkaRDD[1373] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1001.0 (TID 1001, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1002 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_991_piece0 on ***hostname masked***:55033 in 
memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 992 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_990_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1002_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1002_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1002 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1002 (KafkaRDD[1373] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1002.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1003 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1003 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1003 (KafkaRDD[1394] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1002.0 (TID 1002, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1003 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_990_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 994 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_992_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1003_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1003_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1003 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1003 (KafkaRDD[1394] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1003.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1004 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1004 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_992_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1004 (KafkaRDD[1396] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 1003.0 (TID 1003, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1004 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 993 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_994_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_994_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 995 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1004_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1000_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1004_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_993_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1001_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1004 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1004 (KafkaRDD[1396] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1004.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1005 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1005 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1005 (KafkaRDD[1369] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1004.0 (TID 1004, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1005 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_993_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 997 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_995_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_995_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1005_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1005_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1005 from broadcast at 
DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1005 (KafkaRDD[1369] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1005.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1006 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1006 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 996 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1006 (KafkaRDD[1375] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1005.0 (TID 1005, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1006 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_997_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_997_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 998 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_996_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1006_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1006_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1006 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1006 (KafkaRDD[1375] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1006.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1007 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1007 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1007 (KafkaRDD[1386] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1007 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_996_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1006.0 (TID 1006, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1007_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 
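[editorial annotation] While most jobs of batch 1523974200000 ms finish within 6-15 s, job 756 from batch 1523973660000 ms only completes at 17:10:27 with a total delay of 567.963 s; against the 60 s batch interval that means roughly nine later batches were generated while that one job was still running. The burst of "Starting job streaming job 1523974260000 ms.N" entries and the back-to-back submission of task sets 1000-1011 indicate that several jobs of one batch run concurrently, which keeps the fast topics flowing while a single slow topic falls further behind. The snippet below is a hedged sketch of the throttling settings commonly combined with the direct Kafka stream on Spark 1.6; the values are purely illustrative and none of them are read from this job's actual configuration.

```java
import org.apache.spark.SparkConf;

// Hypothetical rate-limiting configuration for the direct Kafka stream; the
// values below are illustrative, not taken from this job's spark-submit command.
public final class BackpressureSketch {
    static SparkConf throttledConf() {
        return new SparkConf()
                .setAppName("predictor-engine-sketch")
                // Let the receiverless direct stream size each batch from recent processing times.
                .set("spark.streaming.backpressure.enabled", "true")
                // Hard cap per Kafka partition per second, so one busy topic cannot
                // produce a 500+ second batch like stage 756 in the log above.
                .set("spark.streaming.kafka.maxRatePerPartition", "1000")
                // Number of streaming jobs from one batch that may run at the same time.
                .set("spark.streaming.concurrentJobs", "4");
    }

    private BackpressureSketch() { }
}
```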
18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1003_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1007_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1002_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1007 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_974_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1007 (KafkaRDD[1386] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1007.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1008 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1008 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1008 (KafkaRDD[1379] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1008 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1007.0 (TID 1007, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_974_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_975_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_975_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1008_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1008_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1008 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1008 (KafkaRDD[1379] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1008.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1009 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1009 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1009 (KafkaRDD[1388] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned 
accumulator 977 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 999 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1009 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1008.0 (TID 1008, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_999_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_999_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1009_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1009_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1009 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1009 (KafkaRDD[1388] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1009.0 with 1 tasks 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1007_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1010 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1010 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1010 (KafkaRDD[1383] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1010 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1009.0 (TID 1009, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1005_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1006_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1010_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1010_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1010 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1010 (KafkaRDD[1383] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1010.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1011 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1011 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 
17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1011 (KafkaRDD[1402] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1011 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1010.0 (TID 1010, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1008_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1011_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1011_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1011 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1011 (KafkaRDD[1402] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1011.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1013 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1012 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1012 (KafkaRDD[1401] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1011.0 (TID 1011, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1012 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1012_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1012_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1012 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1012 (KafkaRDD[1401] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1012.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1012 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1013 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1013 (KafkaRDD[1393] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 1012.0 (TID 1012, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1009_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1013 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1010_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1013_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1013_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1013 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1013 (KafkaRDD[1393] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1013.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1014 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1014 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1014 (KafkaRDD[1395] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1013.0 (TID 1013, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1014 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1012_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1014_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1014_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1014 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1014 (KafkaRDD[1395] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1014.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1015 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1015 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1015 (KafkaRDD[1378] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1014.0 
(TID 1014, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1015 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1013_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1011_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1015_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1015_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1015 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1015 (KafkaRDD[1378] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1015.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1016 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1016 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1016 (KafkaRDD[1380] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1016 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1015.0 (TID 1015, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1016_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1016_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1016 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1016 (KafkaRDD[1380] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1016.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1017 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1017 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1017 (KafkaRDD[1374] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1017 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1016.0 (TID 1016, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2051 
bytes) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1014_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1004_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1017_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1017_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1017 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1017 (KafkaRDD[1374] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1017.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1018 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1018 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1018 (KafkaRDD[1376] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1018 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1017.0 (TID 1017, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1018_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1018_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1018 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1018 (KafkaRDD[1376] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1018.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1020 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1019 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1019 (KafkaRDD[1377] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1015_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1019 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1018.0 (TID 1018, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned 
accumulator 1000 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_998_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1019_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1019_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_998_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1019 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1019 (KafkaRDD[1377] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1019.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1019 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1020 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1020 (KafkaRDD[1391] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1020 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1019.0 (TID 1019, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1017_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1018_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1020_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1020_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1016_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1020 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1020 (KafkaRDD[1391] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1020.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1022 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1021 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1021 (KafkaRDD[1392] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: 
Block broadcast_1021 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1020.0 (TID 1020, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1021_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1021_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1021 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1021 (KafkaRDD[1392] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1021.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1021 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1022 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1022 (KafkaRDD[1400] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1022 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1021.0 (TID 1021, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_978_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1022_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1022_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1022 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1022 (KafkaRDD[1400] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1022.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1023 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1019_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1023 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1023 (KafkaRDD[1390] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_978_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1022.0 (TID 1022, ***hostname 
masked***, executor 8, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1023 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 980 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_979_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1023_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_979_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1023_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1023 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1023 (KafkaRDD[1390] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1023.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1024 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1024 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1024 (KafkaRDD[1370] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1024 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1023.0 (TID 1023, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1021_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1024_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1024_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1024 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1024 (KafkaRDD[1370] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1024.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Got job 1025 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1025 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1025 (KafkaRDD[1399] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1025 stored as values in memory 
(estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1024.0 (TID 1024, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:11:00 INFO storage.MemoryStore: Block broadcast_1025_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1022_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1025_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO spark.SparkContext: Created broadcast 1025 from broadcast at DAGScheduler.scala:1006 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1025 (KafkaRDD[1399] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Adding task set 1025.0 with 1 tasks 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1025.0 (TID 1025, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1023_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1024_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 979 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 981 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1020_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_980_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_980_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1011.0 (TID 1011) in 62 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1011.0, whose tasks have all completed, from pool 18/04/17 17:11:00 INFO scheduler.DAGScheduler: ResultStage 1011 (foreachPartition at PredictorEngineApp.java:153) finished in 0.064 s 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Job 1011 finished: foreachPartition at PredictorEngineApp.java:153, took 0.100409 s 18/04/17 17:11:00 INFO spark.ContextCleaner: Cleaned accumulator 976 18/04/17 17:11:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x39dedef0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x39dedef00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_976_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Removed broadcast_976_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: Opening socket connection to server 
***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44815, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:00 INFO storage.BlockManagerInfo: Added broadcast_1025_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f9b, negotiated timeout = 60000 18/04/17 17:11:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f9b 18/04/17 17:11:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f9b closed 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.34 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1002.0 (TID 1002) in 165 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1002.0, whose tasks have all completed, from pool 18/04/17 17:11:00 INFO scheduler.DAGScheduler: ResultStage 1002 (foreachPartition at PredictorEngineApp.java:153) finished in 0.166 s 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Job 1002 finished: foreachPartition at PredictorEngineApp.java:153, took 0.177308 s 18/04/17 17:11:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4bd8fea connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4bd8fea0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44818, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28f9e, negotiated timeout = 60000 18/04/17 17:11:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28f9e 18/04/17 17:11:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28f9e closed 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1007.0 (TID 1007) in 177 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:11:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1007.0, whose tasks have all completed, from pool 18/04/17 17:11:00 INFO scheduler.DAGScheduler: ResultStage 1007 (foreachPartition at PredictorEngineApp.java:153) finished in 0.178 s 18/04/17 17:11:00 INFO scheduler.DAGScheduler: Job 1007 finished: foreachPartition at PredictorEngineApp.java:153, took 0.203459 s 18/04/17 17:11:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x12052a4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x12052a40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40226, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.5 from job set of time 1523974260000 ms 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96ab, negotiated timeout = 60000 18/04/17 17:11:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96ab 18/04/17 17:11:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96ab closed 18/04/17 17:11:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.18 from job set of time 1523974260000 ms 18/04/17 17:11:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1006.0 (TID 1006) in 1167 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:11:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1006.0, whose tasks have all completed, from pool 18/04/17 17:11:01 INFO scheduler.DAGScheduler: ResultStage 1006 (foreachPartition at PredictorEngineApp.java:153) finished in 1.167 s 18/04/17 17:11:01 INFO scheduler.DAGScheduler: Job 1006 finished: foreachPartition at PredictorEngineApp.java:153, took 1.188964 s 18/04/17 17:11:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x42588ef3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x42588ef30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33848, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a965b, negotiated timeout = 60000 18/04/17 17:11:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a965b 18/04/17 17:11:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a965b closed 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.7 from job set of time 1523974260000 ms 18/04/17 17:11:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1013.0 (TID 1013) in 1357 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:11:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1013.0, whose tasks have all completed, from pool 18/04/17 17:11:01 INFO scheduler.DAGScheduler: ResultStage 1013 (foreachPartition at PredictorEngineApp.java:153) finished in 1.358 s 18/04/17 17:11:01 INFO scheduler.DAGScheduler: Job 1012 finished: foreachPartition at PredictorEngineApp.java:153, took 1.403650 s 18/04/17 17:11:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3be3b621 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3be3b6210x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44828, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fa2, negotiated timeout = 60000 18/04/17 17:11:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fa2 18/04/17 17:11:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fa2 closed 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.25 from job set of time 1523974260000 ms 18/04/17 17:11:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1018.0 (TID 1018) in 1519 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:11:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1018.0, whose tasks have all completed, from pool 18/04/17 17:11:01 INFO scheduler.DAGScheduler: ResultStage 1018 (foreachPartition at PredictorEngineApp.java:153) finished in 1.520 s 18/04/17 17:11:01 INFO scheduler.DAGScheduler: Job 1018 finished: foreachPartition at PredictorEngineApp.java:153, took 1.586593 s 18/04/17 17:11:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4207c995 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4207c9950x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40236, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96b0, negotiated timeout = 60000 18/04/17 17:11:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96b0 18/04/17 17:11:01 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96b0 closed 18/04/17 17:11:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.8 from job set of time 1523974260000 ms 18/04/17 17:11:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1016.0 (TID 1016) in 2002 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:11:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1016.0, whose tasks have all completed, from pool 18/04/17 17:11:02 INFO scheduler.DAGScheduler: ResultStage 1016 (foreachPartition at PredictorEngineApp.java:153) finished in 2.002 s 18/04/17 17:11:02 INFO scheduler.DAGScheduler: Job 1016 finished: foreachPartition at PredictorEngineApp.java:153, took 2.060170 s 18/04/17 17:11:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1e389b46 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1e389b460x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44835, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fa5, negotiated timeout = 60000 18/04/17 17:11:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fa5 18/04/17 17:11:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fa5 closed 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.12 from job set of time 1523974260000 ms 18/04/17 17:11:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1004.0 (TID 1004) in 2543 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:11:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1004.0, whose tasks have all completed, from pool 18/04/17 17:11:02 INFO scheduler.DAGScheduler: ResultStage 1004 (foreachPartition at PredictorEngineApp.java:153) finished in 2.544 s 18/04/17 17:11:02 INFO scheduler.DAGScheduler: Job 1004 finished: foreachPartition at PredictorEngineApp.java:153, took 2.560918 s 18/04/17 17:11:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x78e2cb81 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x78e2cb810x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40243, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96b1, negotiated timeout = 60000 18/04/17 17:11:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96b1 18/04/17 17:11:02 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96b1 closed 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.28 from job set of time 1523974260000 ms 18/04/17 17:11:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1025.0 (TID 1025) in 2654 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:11:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1025.0, whose tasks have all completed, from pool 18/04/17 17:11:02 INFO scheduler.DAGScheduler: ResultStage 1025 (foreachPartition at PredictorEngineApp.java:153) finished in 2.655 s 18/04/17 17:11:02 INFO scheduler.DAGScheduler: Job 1025 finished: foreachPartition at PredictorEngineApp.java:153, took 2.748738 s 18/04/17 17:11:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1ad300f7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1ad300f70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44841, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1020.0 (TID 1020) in 2682 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:11:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1020.0, whose tasks have all completed, from pool 18/04/17 17:11:02 INFO scheduler.DAGScheduler: ResultStage 1020 (foreachPartition at PredictorEngineApp.java:153) finished in 2.682 s 18/04/17 17:11:02 INFO scheduler.DAGScheduler: Job 1019 finished: foreachPartition at PredictorEngineApp.java:153, took 2.756307 s 18/04/17 17:11:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5ecb9454 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5ecb94540x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fa7, negotiated timeout = 60000 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33865, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a965d, negotiated timeout = 60000 18/04/17 17:11:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fa7 18/04/17 17:11:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a965d 18/04/17 17:11:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fa7 closed 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a965d closed 18/04/17 17:11:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.31 from job set of time 1523974260000 ms 18/04/17 17:11:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.23 from job set of time 1523974260000 ms 18/04/17 17:11:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1021.0 (TID 1021) in 3665 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:11:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1021.0, whose tasks have all completed, from pool 18/04/17 17:11:03 INFO scheduler.DAGScheduler: ResultStage 1021 (foreachPartition at PredictorEngineApp.java:153) finished in 3.672 s 18/04/17 17:11:03 INFO scheduler.DAGScheduler: Job 1022 finished: foreachPartition at PredictorEngineApp.java:153, took 3.749485 s 18/04/17 17:11:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5993dda2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 17:11:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5993dda20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40255, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96b3, negotiated timeout = 60000 18/04/17 17:11:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96b3 18/04/17 17:11:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96b3 closed 18/04/17 17:11:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.24 from job set of time 1523974260000 ms 18/04/17 17:11:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1010.0 (TID 1010) in 4338 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:11:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1010.0, whose tasks have all completed, from pool 18/04/17 17:11:04 INFO scheduler.DAGScheduler: ResultStage 1010 (foreachPartition at PredictorEngineApp.java:153) finished in 4.339 s 18/04/17 17:11:04 INFO scheduler.DAGScheduler: Job 1010 finished: foreachPartition at PredictorEngineApp.java:153, took 4.372196 s 18/04/17 17:11:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2955a08a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2955a08a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33878, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a965f, negotiated timeout = 60000 18/04/17 17:11:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a965f 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a965f closed 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.15 from job set of time 1523974260000 ms 18/04/17 17:11:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1001.0 (TID 1001) in 4685 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:11:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1001.0, whose tasks have all completed, from pool 18/04/17 17:11:04 INFO scheduler.DAGScheduler: ResultStage 1001 (foreachPartition at PredictorEngineApp.java:153) finished in 4.685 s 18/04/17 17:11:04 INFO scheduler.DAGScheduler: Job 1001 finished: foreachPartition at PredictorEngineApp.java:153, took 4.693215 s 18/04/17 17:11:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6805b6eb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6805b6eb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40264, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1022.0 (TID 1022) in 4609 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:11:04 INFO scheduler.DAGScheduler: ResultStage 1022 (foreachPartition at PredictorEngineApp.java:153) finished in 4.610 s 18/04/17 17:11:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1022.0, whose tasks have all completed, from pool 18/04/17 17:11:04 INFO scheduler.DAGScheduler: Job 1021 finished: foreachPartition at PredictorEngineApp.java:153, took 4.695976 s 18/04/17 17:11:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6bff976b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6bff976b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96b4, negotiated timeout = 60000 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40265, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96b5, negotiated timeout = 60000 18/04/17 17:11:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96b4 18/04/17 17:11:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96b5 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96b4 closed 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96b5 closed 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.19 from job set of time 1523974260000 ms 18/04/17 17:11:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.32 from job set of time 1523974260000 ms 18/04/17 17:11:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1012.0 (TID 1012) in 4740 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:11:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1012.0, whose tasks have all completed, from pool 18/04/17 17:11:04 INFO scheduler.DAGScheduler: ResultStage 1012 (foreachPartition at PredictorEngineApp.java:153) finished in 4.741 s 18/04/17 17:11:04 INFO scheduler.DAGScheduler: Job 1013 finished: foreachPartition at PredictorEngineApp.java:153, took 4.782006 s 18/04/17 17:11:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x213b0bb1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x213b0bb10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33888, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9662, negotiated timeout = 60000 18/04/17 17:11:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9662 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9662 closed 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.33 from job set of time 1523974260000 ms 18/04/17 17:11:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1024.0 (TID 1024) in 4734 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:11:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1024.0, whose tasks have all completed, from pool 18/04/17 17:11:04 INFO scheduler.DAGScheduler: ResultStage 1024 (foreachPartition at PredictorEngineApp.java:153) finished in 4.736 s 18/04/17 17:11:04 INFO scheduler.DAGScheduler: Job 1024 finished: foreachPartition at PredictorEngineApp.java:153, took 4.826200 s 18/04/17 17:11:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x58d2d687 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x58d2d6870x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40273, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96b6, negotiated timeout = 60000 18/04/17 17:11:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96b6 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96b6 closed 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.2 from job set of time 1523974260000 ms 18/04/17 17:11:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1014.0 (TID 1014) in 4826 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:11:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1014.0, whose tasks have all completed, from pool 18/04/17 17:11:04 INFO scheduler.DAGScheduler: ResultStage 1014 (foreachPartition at PredictorEngineApp.java:153) finished in 4.827 s 18/04/17 17:11:04 INFO scheduler.DAGScheduler: Job 1014 finished: foreachPartition at PredictorEngineApp.java:153, took 4.876897 s 18/04/17 17:11:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xf4abe4d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xf4abe4d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33894, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9663, negotiated timeout = 60000 18/04/17 17:11:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9663 18/04/17 17:11:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9663 closed 18/04/17 17:11:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.27 from job set of time 1523974260000 ms 18/04/17 17:11:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1017.0 (TID 1017) in 5620 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:11:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1017.0, whose tasks have all completed, from pool 18/04/17 17:11:05 INFO scheduler.DAGScheduler: ResultStage 1017 (foreachPartition at PredictorEngineApp.java:153) finished in 5.621 s 18/04/17 17:11:05 INFO scheduler.DAGScheduler: Job 1017 finished: foreachPartition at PredictorEngineApp.java:153, took 5.683106 s 18/04/17 17:11:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1458eb5d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1458eb5d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40280, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96b8, negotiated timeout = 60000 18/04/17 17:11:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96b8 18/04/17 17:11:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1019.0 (TID 1019) in 5637 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:11:05 INFO scheduler.DAGScheduler: ResultStage 1019 (foreachPartition at PredictorEngineApp.java:153) finished in 5.638 s 18/04/17 17:11:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1019.0, whose tasks have all completed, from pool 18/04/17 17:11:05 INFO scheduler.DAGScheduler: Job 1020 finished: foreachPartition at PredictorEngineApp.java:153, took 5.708370 s 18/04/17 17:11:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x544a13fc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x544a13fc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44879, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96b8 closed 18/04/17 17:11:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28faa, negotiated timeout = 60000 18/04/17 17:11:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.6 from job set of time 1523974260000 ms 18/04/17 17:11:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28faa 18/04/17 17:11:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28faa closed 18/04/17 17:11:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.9 from job set of time 1523974260000 ms 18/04/17 17:11:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1005.0 (TID 1005) in 6224 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:11:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1005.0, whose tasks have all completed, from pool 18/04/17 17:11:06 INFO scheduler.DAGScheduler: ResultStage 1005 (foreachPartition at PredictorEngineApp.java:153) finished in 6.224 s 18/04/17 17:11:06 INFO scheduler.DAGScheduler: Job 1005 finished: foreachPartition at PredictorEngineApp.java:153, took 6.243746 s 18/04/17 17:11:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5be32e9e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 17:11:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5be32e9e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33906, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9665, negotiated timeout = 60000 18/04/17 17:11:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9665 18/04/17 17:11:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9665 closed 18/04/17 17:11:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.1 from job set of time 1523974260000 ms 18/04/17 17:11:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1000.0 (TID 1000) in 7140 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:11:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1000.0, whose tasks have all completed, from pool 18/04/17 17:11:07 INFO scheduler.DAGScheduler: ResultStage 1000 (foreachPartition at PredictorEngineApp.java:153) finished in 7.141 s 18/04/17 17:11:07 INFO scheduler.DAGScheduler: Job 1000 finished: foreachPartition at PredictorEngineApp.java:153, took 7.146512 s 18/04/17 17:11:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1df1698b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1df1698b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40292, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96ba, negotiated timeout = 60000 18/04/17 17:11:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96ba 18/04/17 17:11:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96ba closed 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.29 from job set of time 1523974260000 ms 18/04/17 17:11:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1009.0 (TID 1009) in 7224 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:11:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1009.0, whose tasks have all completed, from pool 18/04/17 17:11:07 INFO scheduler.DAGScheduler: ResultStage 1009 (foreachPartition at PredictorEngineApp.java:153) finished in 7.224 s 18/04/17 17:11:07 INFO scheduler.DAGScheduler: Job 1009 finished: foreachPartition at PredictorEngineApp.java:153, took 7.255367 s 18/04/17 17:11:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x194cb37 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x194cb370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:33913, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9666, negotiated timeout = 60000 18/04/17 17:11:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9666 18/04/17 17:11:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9666 closed 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.20 from job set of time 1523974260000 ms 18/04/17 17:11:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1015.0 (TID 1015) in 7466 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:11:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1015.0, whose tasks have all completed, from pool 18/04/17 17:11:07 INFO scheduler.DAGScheduler: ResultStage 1015 (foreachPartition at PredictorEngineApp.java:153) finished in 7.466 s 18/04/17 17:11:07 INFO scheduler.DAGScheduler: Job 1015 finished: foreachPartition at PredictorEngineApp.java:153, took 7.520877 s 18/04/17 17:11:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5fd934e0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5fd934e00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44893, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fab, negotiated timeout = 60000 18/04/17 17:11:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fab 18/04/17 17:11:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fab closed 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.10 from job set of time 1523974260000 ms 18/04/17 17:11:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1003.0 (TID 1003) in 7889 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:11:07 INFO scheduler.DAGScheduler: ResultStage 1003 (foreachPartition at PredictorEngineApp.java:153) finished in 7.889 s 18/04/17 17:11:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1003.0, whose tasks have all completed, from pool 18/04/17 17:11:07 INFO scheduler.DAGScheduler: Job 1003 finished: foreachPartition at PredictorEngineApp.java:153, took 7.902868 s 18/04/17 17:11:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3032aac4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3032aac40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40301, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96bd, negotiated timeout = 60000 18/04/17 17:11:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96bd 18/04/17 17:11:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96bd closed 18/04/17 17:11:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.26 from job set of time 1523974260000 ms 18/04/17 17:11:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1023.0 (TID 1023) in 8372 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:11:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1023.0, whose tasks have all completed, from pool 18/04/17 17:11:08 INFO scheduler.DAGScheduler: ResultStage 1023 (foreachPartition at PredictorEngineApp.java:153) finished in 8.372 s 18/04/17 17:11:08 INFO scheduler.DAGScheduler: Job 1023 finished: foreachPartition at PredictorEngineApp.java:153, took 8.461190 s 18/04/17 17:11:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2d9de531 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2d9de5310x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44900, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fac, negotiated timeout = 60000 18/04/17 17:11:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fac 18/04/17 17:11:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fac closed 18/04/17 17:11:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.22 from job set of time 1523974260000 ms 18/04/17 17:11:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1008.0 (TID 1008) in 10624 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:11:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1008.0, whose tasks have all completed, from pool 18/04/17 17:11:10 INFO scheduler.DAGScheduler: ResultStage 1008 (foreachPartition at PredictorEngineApp.java:153) finished in 10.624 s 18/04/17 17:11:10 INFO scheduler.DAGScheduler: Job 1008 finished: foreachPartition at PredictorEngineApp.java:153, took 10.652328 s 18/04/17 17:11:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3bea011b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:11:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3bea011b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:11:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:11:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40311, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:11:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96c0, negotiated timeout = 60000 18/04/17 17:11:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96c0 18/04/17 17:11:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96c0 closed 18/04/17 17:11:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:11:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974260000 ms.11 from job set of time 1523974260000 ms 18/04/17 17:11:10 INFO scheduler.JobScheduler: Total delay: 10.762 s for time 1523974260000 ms (execution: 10.704 s) 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1332 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1332 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1332 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1332 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1333 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1333 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1333 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1333 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1334 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1334 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1334 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1334 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1335 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1335 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1335 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1335 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1336 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1336 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1336 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1336 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1337 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1337 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1337 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1337 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1338 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1338 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1338 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1338 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1339 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1339 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1339 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1339 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1340 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1340 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1340 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1340 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1341 
from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1341 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1341 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1341 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1342 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1342 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1342 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1342 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1343 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1343 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1343 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1343 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1344 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1344 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1344 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1344 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1345 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1345 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1345 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1345 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1346 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1346 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1346 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1346 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1347 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1347 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1347 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1347 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1348 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1348 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1348 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1348 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1349 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1349 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1349 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1349 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1350 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1350 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1350 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1350 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1351 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1351 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1351 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1351 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1352 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1352 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1352 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1352 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1353 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1353 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1353 from 
persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1353 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1354 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1354 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1354 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1354 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1355 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1355 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1355 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1355 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1356 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1356 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1356 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1356 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1357 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1357 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1357 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1357 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1358 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1358 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1358 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1358 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1359 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1359 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1359 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1359 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1360 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1360 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1360 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1360 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1361 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1361 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1361 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1361 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1362 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1362 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1362 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1362 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1363 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1363 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1363 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1363 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1364 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1364 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1364 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1364 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1365 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1365 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1365 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1365 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1366 from 
persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1366 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1366 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1366 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1367 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1367 18/04/17 17:11:10 INFO kafka.KafkaRDD: Removing RDD 1367 from persistence list 18/04/17 17:11:10 INFO storage.BlockManager: Removing RDD 1367 18/04/17 17:11:10 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:11:10 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974140000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Added jobs for time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.0 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.2 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.3 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.4 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.1 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.0 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.3 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.4 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.6 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.7 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.5 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.8 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.9 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.10 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.11 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.12 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.13 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.14 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.13 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.14 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.15 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.17 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.16 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.17 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.19 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.18 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.16 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.20 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.21 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.21 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.22 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.23 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.24 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.25 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.26 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.27 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.28 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.29 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.30 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.30 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.32 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.31 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.33 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.34 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974320000 ms.35 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.35 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1026 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1026 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1026 (KafkaRDD[1415] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1026 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1026_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1026_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1026 from broadcast at 
DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1026 (KafkaRDD[1415] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1026.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1027 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1027 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1027 (KafkaRDD[1426] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1026.0 (TID 1026, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1027 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1027_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1027_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1027 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1027 (KafkaRDD[1426] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1027.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1028 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1028 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1028 (KafkaRDD[1416] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1027.0 (TID 1027, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1028 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1028_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1028_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1028 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1028 (KafkaRDD[1416] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1028.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1029 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1029 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1029 (KafkaRDD[1435] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1028.0 (TID 1028, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1029 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1029_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1029_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1029 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1029 (KafkaRDD[1435] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1029.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1030 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1030 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1030 (KafkaRDD[1429] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1029.0 (TID 1029, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1030 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1030_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1030_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1030 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1030 (KafkaRDD[1429] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1030.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1031 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1031 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1031 (KafkaRDD[1430] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1027_piece0 in memory on ***hostname masked***:53081 (size: 
3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1030.0 (TID 1030, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1031 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1026_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1031_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1031_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1031 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1031 (KafkaRDD[1430] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1031.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1032 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1032 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1032 (KafkaRDD[1412] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1031.0 (TID 1031, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1032 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1028_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1032_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1032_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1032 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1032 (KafkaRDD[1412] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1032.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1033 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1033 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1033 (KafkaRDD[1405] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1032.0 (TID 1032, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:12:00 
INFO storage.MemoryStore: Block broadcast_1033 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1033_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1033_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1033 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1033 (KafkaRDD[1405] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1033.0 with 1 tasks 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1031_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1034 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1034 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1034 (KafkaRDD[1409] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1034 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1033.0 (TID 1033, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1034_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1034_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1022_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1034 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1034 (KafkaRDD[1409] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1034.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1035 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1035 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1035 (KafkaRDD[1438] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1034.0 (TID 1034, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1035 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1032_piece0 in memory on 
***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1022_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1029_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1035_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1035_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1035 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1035 (KafkaRDD[1438] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1035.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1036 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1036 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1036 (KafkaRDD[1422] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1036 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1035.0 (TID 1035, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1036_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1036_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1036 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1036 (KafkaRDD[1422] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1036.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1037 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1037 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1037 (KafkaRDD[1431] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1036.0 (TID 1036, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1037 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1033_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 
18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1037_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1030_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1037_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1037 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1037 (KafkaRDD[1431] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1037.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1039 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1038 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1038 (KafkaRDD[1433] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1038 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1037.0 (TID 1037, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1001 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1003 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1001_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1038_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1038_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1038 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1038 (KafkaRDD[1433] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1038.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1038 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1039 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1039 (KafkaRDD[1423] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1001_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1039 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1038.0 (TID 
1038, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1002 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1000_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1039_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1035_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1039_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1039 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1039 (KafkaRDD[1423] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1039.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1040 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1040 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1040 (KafkaRDD[1432] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1037_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1040 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1039.0 (TID 1039, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1000_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1040_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1040_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1040 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1040 (KafkaRDD[1432] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1040.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1041 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1041 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1041 (KafkaRDD[1424] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block 
broadcast_1041 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1040.0 (TID 1040, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1038_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1041_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1041_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1041 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1041 (KafkaRDD[1424] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1041.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1042 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1042 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1042 (KafkaRDD[1411] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1042 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1041.0 (TID 1041, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1042_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1042_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1042 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1042 (KafkaRDD[1411] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1042.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1043 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1043 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1043 (KafkaRDD[1419] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1034_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1043 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1042.0 (TID 1042, ***hostname masked***, executor 
2, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1040_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1043_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1043_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1043 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1043 (KafkaRDD[1419] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1043.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1044 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1044 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1044 (KafkaRDD[1406] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1044 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1043.0 (TID 1043, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1044_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1044_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1044 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1044 (KafkaRDD[1406] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1044.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1045 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1045 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1045 (KafkaRDD[1436] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1045 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1044.0 (TID 1044, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1039_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1041_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 
INFO storage.BlockManagerInfo: Added broadcast_1042_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1045_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1005 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1045_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1045 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1045 (KafkaRDD[1436] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1045.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1046 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1046 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1046 (KafkaRDD[1437] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1036_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1003_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1046 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1045.0 (TID 1045, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1003_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1004 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1046_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1002_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1046_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1046 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1046 (KafkaRDD[1437] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1046.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1047 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1047 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1047 (KafkaRDD[1428] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1047 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1002_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1046.0 (TID 1046, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1007 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1005_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1005_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1047_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1047_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1047 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1047 (KafkaRDD[1428] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1047.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1048 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1048 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1048 (KafkaRDD[1413] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1048 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1047.0 (TID 1047, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1006 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1043_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1004_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1004_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1048_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1048_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1048 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1048 (KafkaRDD[1413] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1048.0 with 1 tasks 
18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1049 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1049 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1009 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1049 (KafkaRDD[1414] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1049 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1048.0 (TID 1048, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1007_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1044_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1049_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1049_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1049 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1049 (KafkaRDD[1414] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1049.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1050 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1050 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1050 (KafkaRDD[1410] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1050 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1049.0 (TID 1049, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1046_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1045_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1050_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1050_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1050 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1050 
(KafkaRDD[1410] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1050.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Got job 1051 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1051 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1051 (KafkaRDD[1427] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1051 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1050.0 (TID 1050, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:12:00 INFO storage.MemoryStore: Block broadcast_1051_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1051_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO spark.SparkContext: Created broadcast 1051 from broadcast at DAGScheduler.scala:1006 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1051 (KafkaRDD[1427] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Adding task set 1051.0 with 1 tasks 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1051.0 (TID 1051, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1048_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1047_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1050_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1051_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Added broadcast_1049_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1040.0 (TID 1040) in 54 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:12:00 INFO scheduler.DAGScheduler: ResultStage 1040 (foreachPartition at PredictorEngineApp.java:153) finished in 0.055 s 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1040.0, whose tasks have all completed, from pool 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Job 1040 finished: foreachPartition at PredictorEngineApp.java:153, took 0.114005 s 18/04/17 17:12:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2e8263f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2e8263f0x0, 
quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40481, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1007_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96ca, negotiated timeout = 60000 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1008 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1006_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96ca 18/04/17 17:12:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96ca closed 18/04/17 17:12:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1006_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.28 from job set of time 1523974320000 ms 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1011 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1009_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1009_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1010 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1008_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1008_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1013 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1011_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1011_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1012 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1010_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1010_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1015 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1013_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1013_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1014 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1012_piece0 on ***IP 
masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1012_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1017 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1015_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1015_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1016 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1014_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1014_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1019 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1017_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1017_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1018 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1016_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1016_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1021 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1019_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1019_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1020 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1018_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1018_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1023 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1021_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1021_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1022 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1020_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1020_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1025 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1023_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1023_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1024 
18/04/17 17:12:00 INFO spark.ContextCleaner: Cleaned accumulator 1026 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1024_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1024_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1025_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:00 INFO storage.BlockManagerInfo: Removed broadcast_1025_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1036.0 (TID 1036) in 187 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:12:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1036.0, whose tasks have all completed, from pool 18/04/17 17:12:00 INFO scheduler.DAGScheduler: ResultStage 1036 (foreachPartition at PredictorEngineApp.java:153) finished in 0.187 s 18/04/17 17:12:00 INFO scheduler.DAGScheduler: Job 1036 finished: foreachPartition at PredictorEngineApp.java:153, took 0.234444 s 18/04/17 17:12:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xdf5a4c6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xdf5a4c60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40484, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96ce, negotiated timeout = 60000 18/04/17 17:12:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96ce 18/04/17 17:12:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96ce closed 18/04/17 17:12:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.18 from job set of time 1523974320000 ms 18/04/17 17:12:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1032.0 (TID 1032) in 1340 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:12:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1032.0, whose tasks have all completed, from pool 18/04/17 17:12:01 INFO scheduler.DAGScheduler: ResultStage 1032 (foreachPartition at PredictorEngineApp.java:153) finished in 1.340 s 18/04/17 17:12:01 INFO scheduler.DAGScheduler: Job 1032 finished: foreachPartition at PredictorEngineApp.java:153, took 1.365943 s 18/04/17 17:12:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x68ea0249 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x68ea02490x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45083, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fbd, negotiated timeout = 60000 18/04/17 17:12:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fbd 18/04/17 17:12:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fbd closed 18/04/17 17:12:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.8 from job set of time 1523974320000 ms 18/04/17 17:12:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1029.0 (TID 1029) in 1588 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:12:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1029.0, whose tasks have all completed, from pool 18/04/17 17:12:01 INFO scheduler.DAGScheduler: ResultStage 1029 (foreachPartition at PredictorEngineApp.java:153) finished in 1.588 s 18/04/17 17:12:01 INFO scheduler.DAGScheduler: Job 1029 finished: foreachPartition at PredictorEngineApp.java:153, took 1.604721 s 18/04/17 17:12:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x22f3f043 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x22f3f0430x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45086, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fbe, negotiated timeout = 60000 18/04/17 17:12:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fbe 18/04/17 17:12:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fbe closed 18/04/17 17:12:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.31 from job set of time 1523974320000 ms 18/04/17 17:12:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1042.0 (TID 1042) in 2935 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:12:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1042.0, whose tasks have all completed, from pool 18/04/17 17:12:03 INFO scheduler.DAGScheduler: ResultStage 1042 (foreachPartition at PredictorEngineApp.java:153) finished in 2.935 s 18/04/17 17:12:03 INFO scheduler.DAGScheduler: Job 1042 finished: foreachPartition at PredictorEngineApp.java:153, took 2.999490 s 18/04/17 17:12:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x68cefb2a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x68cefb2a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45091, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fc0, negotiated timeout = 60000 18/04/17 17:12:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1030.0 (TID 1030) in 2992 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:12:03 INFO scheduler.DAGScheduler: ResultStage 1030 (foreachPartition at PredictorEngineApp.java:153) finished in 2.992 s 18/04/17 17:12:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1030.0, whose tasks have all completed, from pool 18/04/17 17:12:03 INFO scheduler.DAGScheduler: Job 1030 finished: foreachPartition at PredictorEngineApp.java:153, took 3.011788 s 18/04/17 17:12:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fc0 18/04/17 17:12:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fc0 closed 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.7 from job set of time 1523974320000 ms 18/04/17 17:12:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.25 from job set of time 1523974320000 ms 18/04/17 17:12:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1046.0 (TID 1046) in 3117 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:12:03 INFO scheduler.DAGScheduler: ResultStage 1046 (foreachPartition at PredictorEngineApp.java:153) finished in 3.118 s 18/04/17 17:12:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1046.0, whose tasks have all completed, from pool 18/04/17 17:12:03 INFO scheduler.DAGScheduler: Job 1046 finished: foreachPartition at PredictorEngineApp.java:153, took 3.192550 s 18/04/17 17:12:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6d03833e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6d03833e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40499, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96d2, negotiated timeout = 60000 18/04/17 17:12:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96d2 18/04/17 17:12:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96d2 closed 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.33 from job set of time 1523974320000 ms 18/04/17 17:12:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1050.0 (TID 1050) in 3262 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:12:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1050.0, whose tasks have all completed, from pool 18/04/17 17:12:03 INFO scheduler.DAGScheduler: ResultStage 1050 (foreachPartition at PredictorEngineApp.java:153) finished in 3.262 s 18/04/17 17:12:03 INFO scheduler.DAGScheduler: Job 1050 finished: foreachPartition at PredictorEngineApp.java:153, took 3.346190 s 18/04/17 17:12:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x765dc690 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x765dc6900x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45099, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fc1, negotiated timeout = 60000 18/04/17 17:12:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fc1 18/04/17 17:12:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fc1 closed 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.6 from job set of time 1523974320000 ms 18/04/17 17:12:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1045.0 (TID 1045) in 3316 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:12:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1045.0, whose tasks have all completed, from pool 18/04/17 17:12:03 INFO scheduler.DAGScheduler: ResultStage 1045 (foreachPartition at PredictorEngineApp.java:153) finished in 3.316 s 18/04/17 17:12:03 INFO scheduler.DAGScheduler: Job 1045 finished: foreachPartition at PredictorEngineApp.java:153, took 3.388445 s 18/04/17 17:12:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x23d2df1b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x23d2df1b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40507, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96d3, negotiated timeout = 60000 18/04/17 17:12:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96d3 18/04/17 17:12:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96d3 closed 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.32 from job set of time 1523974320000 ms 18/04/17 17:12:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1047.0 (TID 1047) in 3404 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:12:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1047.0, whose tasks have all completed, from pool 18/04/17 17:12:03 INFO scheduler.DAGScheduler: ResultStage 1047 (foreachPartition at PredictorEngineApp.java:153) finished in 3.405 s 18/04/17 17:12:03 INFO scheduler.DAGScheduler: Job 1047 finished: foreachPartition at PredictorEngineApp.java:153, took 3.482392 s 18/04/17 17:12:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b62db26 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1b62db260x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34128, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a967e, negotiated timeout = 60000 18/04/17 17:12:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a967e 18/04/17 17:12:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a967e closed 18/04/17 17:12:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.24 from job set of time 1523974320000 ms 18/04/17 17:12:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1048.0 (TID 1048) in 5750 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:12:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1048.0, whose tasks have all completed, from pool 18/04/17 17:12:05 INFO scheduler.DAGScheduler: ResultStage 1048 (foreachPartition at PredictorEngineApp.java:153) finished in 5.750 s 18/04/17 17:12:05 INFO scheduler.DAGScheduler: Job 1048 finished: foreachPartition at PredictorEngineApp.java:153, took 5.830227 s 18/04/17 17:12:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x633a087b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x633a087b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45111, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1044.0 (TID 1044) in 5768 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:12:05 INFO scheduler.DAGScheduler: ResultStage 1044 (foreachPartition at PredictorEngineApp.java:153) finished in 5.768 s 18/04/17 17:12:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1044.0, whose tasks have all completed, from pool 18/04/17 17:12:05 INFO scheduler.DAGScheduler: Job 1044 finished: foreachPartition at PredictorEngineApp.java:153, took 5.837361 s 18/04/17 17:12:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2beba42b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2beba42b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34135, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fc4, negotiated timeout = 60000 18/04/17 17:12:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1039.0 (TID 1039) in 5791 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:12:05 INFO scheduler.DAGScheduler: ResultStage 1039 (foreachPartition at PredictorEngineApp.java:153) finished in 5.792 s 18/04/17 17:12:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1039.0, whose tasks have all completed, from pool 18/04/17 17:12:05 INFO scheduler.DAGScheduler: Job 1038 finished: foreachPartition at PredictorEngineApp.java:153, took 5.847912 s 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9680, negotiated timeout = 60000 18/04/17 17:12:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fc4 18/04/17 17:12:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fc4 closed 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.9 from job set of time 1523974320000 ms 18/04/17 17:12:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9680 18/04/17 17:12:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9680 closed 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.19 from job set of time 1523974320000 ms 18/04/17 17:12:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.2 from job set of time 1523974320000 ms 18/04/17 17:12:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1038.0 (TID 1038) in 5836 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:12:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1038.0, whose tasks have all completed, from pool 18/04/17 17:12:05 INFO scheduler.DAGScheduler: ResultStage 1038 (foreachPartition at PredictorEngineApp.java:153) finished in 5.836 s 18/04/17 17:12:05 INFO scheduler.DAGScheduler: Job 1039 finished: foreachPartition at PredictorEngineApp.java:153, took 5.889184 s 18/04/17 17:12:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5e0d419 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5e0d4190x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40523, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96d6, negotiated timeout = 60000 18/04/17 17:12:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96d6 18/04/17 17:12:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96d6 closed 18/04/17 17:12:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.29 from job set of time 1523974320000 ms 18/04/17 17:12:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1028.0 (TID 1028) in 6734 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:12:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1028.0, whose tasks have all completed, from pool 18/04/17 17:12:06 INFO scheduler.DAGScheduler: ResultStage 1028 (foreachPartition at PredictorEngineApp.java:153) finished in 6.735 s 18/04/17 17:12:06 INFO scheduler.DAGScheduler: Job 1028 finished: foreachPartition at PredictorEngineApp.java:153, took 6.748338 s 18/04/17 17:12:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3b5c0778 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3b5c07780x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40527, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96d8, negotiated timeout = 60000 18/04/17 17:12:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96d8 18/04/17 17:12:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96d8 closed 18/04/17 17:12:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.12 from job set of time 1523974320000 ms 18/04/17 17:12:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1043.0 (TID 1043) in 7011 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:12:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1043.0, whose tasks have all completed, from pool 18/04/17 17:12:07 INFO scheduler.DAGScheduler: ResultStage 1043 (foreachPartition at PredictorEngineApp.java:153) finished in 7.012 s 18/04/17 17:12:07 INFO scheduler.DAGScheduler: Job 1043 finished: foreachPartition at PredictorEngineApp.java:153, took 7.078499 s 18/04/17 17:12:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x599bb4c4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x599bb4c40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45126, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fc5, negotiated timeout = 60000 18/04/17 17:12:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fc5 18/04/17 17:12:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fc5 closed 18/04/17 17:12:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.15 from job set of time 1523974320000 ms 18/04/17 17:12:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1026.0 (TID 1026) in 7393 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:12:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1026.0, whose tasks have all completed, from pool 18/04/17 17:12:07 INFO scheduler.DAGScheduler: ResultStage 1026 (foreachPartition at PredictorEngineApp.java:153) finished in 7.394 s 18/04/17 17:12:07 INFO scheduler.DAGScheduler: Job 1026 finished: foreachPartition at PredictorEngineApp.java:153, took 7.400527 s 18/04/17 17:12:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f3e47e6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f3e47e60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34152, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9683, negotiated timeout = 60000 18/04/17 17:12:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9683 18/04/17 17:12:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9683 closed 18/04/17 17:12:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.11 from job set of time 1523974320000 ms 18/04/17 17:12:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1051.0 (TID 1051) in 8433 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:12:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1051.0, whose tasks have all completed, from pool 18/04/17 17:12:08 INFO scheduler.DAGScheduler: ResultStage 1051 (foreachPartition at PredictorEngineApp.java:153) finished in 8.433 s 18/04/17 17:12:08 INFO scheduler.DAGScheduler: Job 1051 finished: foreachPartition at PredictorEngineApp.java:153, took 8.518663 s 18/04/17 17:12:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x60076c2f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x60076c2f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34156, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9686, negotiated timeout = 60000 18/04/17 17:12:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9686 18/04/17 17:12:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9686 closed 18/04/17 17:12:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1035.0 (TID 1035) in 8503 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:12:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1035.0, whose tasks have all completed, from pool 18/04/17 17:12:08 INFO scheduler.DAGScheduler: ResultStage 1035 (foreachPartition at PredictorEngineApp.java:153) finished in 8.503 s 18/04/17 17:12:08 INFO scheduler.DAGScheduler: Job 1035 finished: foreachPartition at PredictorEngineApp.java:153, took 8.547619 s 18/04/17 17:12:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x685b4849 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x685b48490x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.23 from job set of time 1523974320000 ms 18/04/17 17:12:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45136, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fc6, negotiated timeout = 60000 18/04/17 17:12:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fc6 18/04/17 17:12:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fc6 closed 18/04/17 17:12:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.34 from job set of time 1523974320000 ms 18/04/17 17:12:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1034.0 (TID 1034) in 10797 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:12:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1034.0, whose tasks have all completed, from pool 18/04/17 17:12:10 INFO scheduler.DAGScheduler: ResultStage 1034 (foreachPartition at PredictorEngineApp.java:153) finished in 10.797 s 18/04/17 17:12:10 INFO scheduler.DAGScheduler: Job 1034 finished: foreachPartition at PredictorEngineApp.java:153, took 10.839200 s 18/04/17 17:12:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3d393352 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3d3933520x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45142, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fc7, negotiated timeout = 60000 18/04/17 17:12:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fc7 18/04/17 17:12:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fc7 closed 18/04/17 17:12:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.5 from job set of time 1523974320000 ms 18/04/17 17:12:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1037.0 (TID 1037) in 12913 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:12:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1037.0, whose tasks have all completed, from pool 18/04/17 17:12:13 INFO scheduler.DAGScheduler: ResultStage 1037 (foreachPartition at PredictorEngineApp.java:153) finished in 12.913 s 18/04/17 17:12:13 INFO scheduler.DAGScheduler: Job 1037 finished: foreachPartition at PredictorEngineApp.java:153, took 12.963561 s 18/04/17 17:12:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x495962ed connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x495962ed0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34171, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9688, negotiated timeout = 60000 18/04/17 17:12:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9688 18/04/17 17:12:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9688 closed 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.27 from job set of time 1523974320000 ms 18/04/17 17:12:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1033.0 (TID 1033) in 13131 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:12:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1033.0, whose tasks have all completed, from pool 18/04/17 17:12:13 INFO scheduler.DAGScheduler: ResultStage 1033 (foreachPartition at PredictorEngineApp.java:153) finished in 13.131 s 18/04/17 17:12:13 INFO scheduler.DAGScheduler: Job 1033 finished: foreachPartition at PredictorEngineApp.java:153, took 13.160360 s 18/04/17 17:12:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2d916a88 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2d916a880x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34174, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9689, negotiated timeout = 60000 18/04/17 17:12:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9689 18/04/17 17:12:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9689 closed 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.1 from job set of time 1523974320000 ms 18/04/17 17:12:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1041.0 (TID 1041) in 13344 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:12:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1041.0, whose tasks have all completed, from pool 18/04/17 17:12:13 INFO scheduler.DAGScheduler: ResultStage 1041 (foreachPartition at PredictorEngineApp.java:153) finished in 13.344 s 18/04/17 17:12:13 INFO scheduler.DAGScheduler: Job 1041 finished: foreachPartition at PredictorEngineApp.java:153, took 13.405597 s 18/04/17 17:12:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x528ea7c5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x528ea7c50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40559, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96dc, negotiated timeout = 60000 18/04/17 17:12:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96dc 18/04/17 17:12:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96dc closed 18/04/17 17:12:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.20 from job set of time 1523974320000 ms 18/04/17 17:12:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1049.0 (TID 1049) in 14346 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:12:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1049.0, whose tasks have all completed, from pool 18/04/17 17:12:14 INFO scheduler.DAGScheduler: ResultStage 1049 (foreachPartition at PredictorEngineApp.java:153) finished in 14.346 s 18/04/17 17:12:14 INFO scheduler.DAGScheduler: Job 1049 finished: foreachPartition at PredictorEngineApp.java:153, took 14.428282 s 18/04/17 17:12:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x505e95db connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x505e95db0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45158, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fc8, negotiated timeout = 60000 18/04/17 17:12:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fc8 18/04/17 17:12:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fc8 closed 18/04/17 17:12:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.10 from job set of time 1523974320000 ms 18/04/17 17:12:20 INFO scheduler.DAGScheduler: ResultStage 1031 (foreachPartition at PredictorEngineApp.java:153) finished in 20.489 s 18/04/17 17:12:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1031.0 (TID 1031) in 20489 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:12:20 INFO scheduler.DAGScheduler: Job 1031 finished: foreachPartition at PredictorEngineApp.java:153, took 20.512079 s 18/04/17 17:12:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 1031.0, whose tasks have all completed, from pool 18/04/17 17:12:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d5d0359 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d5d03590x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40574, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1027.0 (TID 1027) in 20507 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:12:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 1027.0, whose tasks have all completed, from pool 18/04/17 17:12:20 INFO scheduler.DAGScheduler: ResultStage 1027 (foreachPartition at PredictorEngineApp.java:153) finished in 20.507 s 18/04/17 17:12:20 INFO scheduler.DAGScheduler: Job 1027 finished: foreachPartition at PredictorEngineApp.java:153, took 20.516653 s 18/04/17 17:12:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2b741d2f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:12:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2b741d2f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:12:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:12:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45170, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:12:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96df, negotiated timeout = 60000 18/04/17 17:12:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fca, negotiated timeout = 60000 18/04/17 17:12:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96df 18/04/17 17:12:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fca 18/04/17 17:12:20 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96df closed 18/04/17 17:12:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:20 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fca closed 18/04/17 17:12:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:12:20 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.26 from job set of time 1523974320000 ms 18/04/17 17:12:20 INFO scheduler.JobScheduler: Finished job streaming job 1523974320000 ms.22 from job set of time 1523974320000 ms 18/04/17 17:12:20 INFO scheduler.JobScheduler: Total delay: 20.611 s for time 1523974320000 ms (execution: 20.557 s) 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1368 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1368 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1368 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1368 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1369 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1369 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1369 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1369 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1370 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1370 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1370 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1370 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1371 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1371 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1371 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1371 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1372 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1372 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1372 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1372 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1373 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1373 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1373 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1373 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1374 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1374 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1374 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1374 18/04/17 17:12:20 INFO kafka.KafkaRDD: 
Removing RDD 1375 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1375 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1375 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1375 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1376 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1376 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1376 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1376 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1377 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1377 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1377 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1377 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1378 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1378 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1378 from persistence list 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1050 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1033 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1378 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1379 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1379 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1379 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1029_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1379 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1380 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1380 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1380 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1380 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1381 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1029_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1381 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1381 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1381 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1382 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1382 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1382 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1382 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1383 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1030_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1383 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1383 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1383 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1384 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1384 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1384 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1384 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1385 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1385 18/04/17 17:12:20 INFO 
kafka.KafkaRDD: Removing RDD 1385 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1385 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1030_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1386 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1386 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1386 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1386 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1387 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1387 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1387 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1031_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1387 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1388 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1031_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1388 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1388 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1388 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1389 from persistence list 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1032 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1389 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1389 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1389 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1390 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1033_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1390 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1390 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1033_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1390 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1391 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1391 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1034 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1391 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1391 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1392 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1032_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1392 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1392 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1392 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1393 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1032_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1393 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1393 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing 
RDD 1393 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1394 from persistence list 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1030 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1394 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1394 from persistence list 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1395 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1034_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1395 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1034_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1394 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1395 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1395 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1396 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1396 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1396 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1396 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1035 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1037 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1397 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1397 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1397 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1035_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1397 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1398 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1398 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1398 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1398 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1035_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1399 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1399 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1399 from persistence list 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1036 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1399 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1400 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1400 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1400 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1027_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1027_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1400 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1401 from persistence list 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1038 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1401 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1401 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1401 18/04/17 17:12:20 INFO kafka.KafkaRDD: 
Removing RDD 1402 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1036_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1402 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1402 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1402 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1403 from persistence list 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1036_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1403 18/04/17 17:12:20 INFO kafka.KafkaRDD: Removing RDD 1403 from persistence list 18/04/17 17:12:20 INFO storage.BlockManager: Removing RDD 1403 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1027 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1029 18/04/17 17:12:20 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:12:20 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974200000 ms 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1037_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1037_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1026_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1026_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1028_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1028_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1039 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1039_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1039_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1040 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1038_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1038_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1042 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1040_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1040_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1041 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1042_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1042_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1043 
18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1041_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1041_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1045 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1043_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1043_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1044 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1051_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1051_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1052 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1050_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1050_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1045_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1045_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1046 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1044_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1044_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1048 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1046_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1046_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1047 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1048_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1048_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1049 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1047_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1047_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1051 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1049_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:12:20 INFO storage.BlockManagerInfo: Removed broadcast_1049_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned 
accumulator 1028 18/04/17 17:12:20 INFO spark.ContextCleaner: Cleaned accumulator 1031 18/04/17 17:13:00 INFO scheduler.JobScheduler: Added jobs for time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.0 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.1 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.2 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.3 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.0 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.4 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.6 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.3 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.5 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.4 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.7 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.8 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.9 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.10 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.11 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.12 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.13 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.14 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.13 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.16 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.14 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.16 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.15 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.17 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.18 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.17 
from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.20 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.19 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.21 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.22 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.21 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.23 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.24 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.26 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.27 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.25 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.28 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.29 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.30 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.31 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.30 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.33 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.32 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.34 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974380000 ms.35 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.35 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1052 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1052 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1052 (KafkaRDD[1463] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at 
PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1052 stored as values in memory (estimated size 5.7 KB, free 491.7 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1052_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.7 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1052_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1052 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1052 (KafkaRDD[1463] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1052.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1053 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1053 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: 
List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1053 (KafkaRDD[1471] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1052.0 (TID 1052, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1053 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1053_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1053_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1053 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1053 (KafkaRDD[1471] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1053.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1054 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1054 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1054 (KafkaRDD[1442] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1053.0 (TID 1053, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1054 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1054_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1054_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1054 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1054 (KafkaRDD[1442] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1054.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1055 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1055 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1055 (KafkaRDD[1455] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1054.0 (TID 1054, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1055 stored as values 
in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1055_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1055_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1055 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1055 (KafkaRDD[1455] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1055.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1056 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1056 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1056 (KafkaRDD[1473] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1055.0 (TID 1055, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1056 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1052_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1056_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1056_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1056 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1056 (KafkaRDD[1473] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1056.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1057 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1057 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1057 (KafkaRDD[1451] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1056.0 (TID 1056, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1057 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1057_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1057_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO 
spark.SparkContext: Created broadcast 1057 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1057 (KafkaRDD[1451] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1057.0 with 1 tasks 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1053_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1058 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1058 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1058 (KafkaRDD[1445] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1057.0 (TID 1057, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1058 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1058_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1058_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1058 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1058 (KafkaRDD[1445] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1058.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1059 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1059 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1059 (KafkaRDD[1450] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1059 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1058.0 (TID 1058, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1059_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1059_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1059 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1059 (KafkaRDD[1450] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1059.0 with 1 tasks 18/04/17 
17:13:00 INFO scheduler.DAGScheduler: Got job 1060 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1060 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1060 (KafkaRDD[1474] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1054_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1060 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1059.0 (TID 1059, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1055_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1060_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1060_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1060 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1060 (KafkaRDD[1474] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1060.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1061 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1061 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1061 (KafkaRDD[1464] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1061 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1060.0 (TID 1060, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1061_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1061_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1061 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1061 (KafkaRDD[1464] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1061.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1062 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1062 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1062 (KafkaRDD[1468] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1062 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1061.0 (TID 1061, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1057_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1062_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1062_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1062 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1062 (KafkaRDD[1468] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1062.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1063 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1063 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1063 (KafkaRDD[1469] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1063 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1062.0 (TID 1062, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1063_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1063_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1063 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1063 (KafkaRDD[1469] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1063.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1064 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1064 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1064 (KafkaRDD[1447] at createDirectStream at PredictorEngineApp.java:125), which has 
no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1064 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1063.0 (TID 1063, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1059_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1058_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1064_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1064_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1064 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1064 (KafkaRDD[1447] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1064.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1065 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1065 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1065 (KafkaRDD[1446] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1065 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1064.0 (TID 1064, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1060_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1065_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1065_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1065 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1065 (KafkaRDD[1446] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1065.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1066 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1066 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1066 (KafkaRDD[1449] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO 
storage.MemoryStore: Block broadcast_1066 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1065.0 (TID 1065, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1066_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1066_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1062_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1066 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1066 (KafkaRDD[1449] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1066.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1067 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1067 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1067 (KafkaRDD[1466] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1064_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1067 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1066.0 (TID 1066, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1063_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1061_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1067_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1067_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1067 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1067 (KafkaRDD[1466] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1067.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1068 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1068 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1068 (KafkaRDD[1452] at createDirectStream 
at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1068 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1067.0 (TID 1067, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1068_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1068_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1068 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1068 (KafkaRDD[1452] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1068.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1069 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1069 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1069 (KafkaRDD[1465] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1069 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1068.0 (TID 1068, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1065_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1066_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1069_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1069_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1069 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1069 (KafkaRDD[1465] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1069.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1070 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1070 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1070 (KafkaRDD[1462] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1070 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 
17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1056_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1069.0 (TID 1069, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1070_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1070_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1070 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1070 (KafkaRDD[1462] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1070.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1071 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1071 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1071 (KafkaRDD[1459] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1071 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1070.0 (TID 1070, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1068_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1071_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1071_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1071 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1071 (KafkaRDD[1459] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1071.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1072 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1072 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1072 (KafkaRDD[1441] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1072 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1071.0 (TID 1071, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block 
broadcast_1072_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1072_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1072 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1072 (KafkaRDD[1441] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1072.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1073 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1073 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1073 (KafkaRDD[1467] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1073 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1069_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1072.0 (TID 1072, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1070_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1073_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1073_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1073 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1073 (KafkaRDD[1467] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1073.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1074 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1074 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1074 (KafkaRDD[1472] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1074 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1067_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1073.0 (TID 1073, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1074_piece0 stored as bytes in memory (estimated 
size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1074_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1074 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1074 (KafkaRDD[1472] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1074.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1058.0 (TID 1058) in 58 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1058.0, whose tasks have all completed, from pool 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1075 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1075 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1075 (KafkaRDD[1458] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1075 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1074.0 (TID 1074, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1071_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1072_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1075_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1075_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1075 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1075 (KafkaRDD[1458] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1075.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1076 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1076 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1076 (KafkaRDD[1460] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1076 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1075.0 (TID 1075, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: 
Added broadcast_1073_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1076_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1076_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1076 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1076 (KafkaRDD[1460] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1076.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Got job 1077 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1077 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1077 (KafkaRDD[1448] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1077 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1076.0 (TID 1076, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:13:00 INFO storage.MemoryStore: Block broadcast_1077_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1077_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:13:00 INFO spark.SparkContext: Created broadcast 1077 from broadcast at DAGScheduler.scala:1006 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1077 (KafkaRDD[1448] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Adding task set 1077.0 with 1 tasks 18/04/17 17:13:00 INFO scheduler.DAGScheduler: ResultStage 1058 (foreachPartition at PredictorEngineApp.java:153) finished in 0.065 s 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Job 1058 finished: foreachPartition at PredictorEngineApp.java:153, took 0.091563 s 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1074_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1077.0 (TID 1077, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:13:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7e74996d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7e74996d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40724, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1075_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1062.0 (TID 1062) in 60 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1062.0, whose tasks have all completed, from pool 18/04/17 17:13:00 INFO scheduler.DAGScheduler: ResultStage 1062 (foreachPartition at PredictorEngineApp.java:153) finished in 0.061 s 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Job 1062 finished: foreachPartition at PredictorEngineApp.java:153, took 0.098868 s 18/04/17 17:13:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4684349 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x46843490x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34343, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1057.0 (TID 1057) in 79 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:13:00 INFO scheduler.DAGScheduler: ResultStage 1057 (foreachPartition at PredictorEngineApp.java:153) finished in 0.080 s 18/04/17 17:13:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1057.0, whose tasks have all completed, from pool 18/04/17 17:13:00 INFO scheduler.DAGScheduler: Job 1057 finished: foreachPartition at PredictorEngineApp.java:153, took 0.102930 s 18/04/17 17:13:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x47d42602 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x47d426020x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1077_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45321, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:00 INFO storage.BlockManagerInfo: Added broadcast_1076_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96e9, negotiated timeout = 60000 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a969e, negotiated timeout = 60000 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fd6, negotiated timeout = 60000 18/04/17 17:13:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a969e 18/04/17 17:13:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fd6 18/04/17 17:13:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96e9 18/04/17 17:13:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a969e closed 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96e9 closed 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fd6 closed 18/04/17 17:13:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.28 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.5 from job set of time 1523974380000 ms 18/04/17 17:13:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.11 from job set of time 1523974380000 ms 18/04/17 17:13:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1077.0 (TID 1077) in 1114 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:13:01 INFO scheduler.DAGScheduler: ResultStage 1077 (foreachPartition at PredictorEngineApp.java:153) finished in 1.115 s 18/04/17 17:13:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1077.0, whose tasks have all completed, from pool 18/04/17 17:13:01 INFO scheduler.DAGScheduler: Job 1077 finished: foreachPartition at PredictorEngineApp.java:153, took 1.203705 s 18/04/17 17:13:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2362586 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x23625860x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34354, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96a5, negotiated timeout = 60000 18/04/17 17:13:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96a5 18/04/17 17:13:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96a5 closed 18/04/17 17:13:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.8 from job set of time 1523974380000 ms 18/04/17 17:13:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1069.0 (TID 1069) in 2095 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:13:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1069.0, whose tasks have all completed, from pool 18/04/17 17:13:02 INFO scheduler.DAGScheduler: ResultStage 1069 (foreachPartition at PredictorEngineApp.java:153) finished in 2.106 s 18/04/17 17:13:02 INFO scheduler.DAGScheduler: Job 1069 finished: foreachPartition at PredictorEngineApp.java:153, took 2.163770 s 18/04/17 17:13:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2dbed173 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2dbed1730x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45335, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fda, negotiated timeout = 60000 18/04/17 17:13:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fda 18/04/17 17:13:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fda closed 18/04/17 17:13:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.25 from job set of time 1523974380000 ms 18/04/17 17:13:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1064.0 (TID 1064) in 2215 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:13:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1064.0, whose tasks have all completed, from pool 18/04/17 17:13:02 INFO scheduler.DAGScheduler: ResultStage 1064 (foreachPartition at PredictorEngineApp.java:153) finished in 2.215 s 18/04/17 17:13:02 INFO scheduler.DAGScheduler: Job 1064 finished: foreachPartition at PredictorEngineApp.java:153, took 2.258823 s 18/04/17 17:13:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x68c751ff connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x68c751ff0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34361, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96a8, negotiated timeout = 60000 18/04/17 17:13:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96a8 18/04/17 17:13:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96a8 closed 18/04/17 17:13:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.7 from job set of time 1523974380000 ms 18/04/17 17:13:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1052.0 (TID 1052) in 3915 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:13:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1052.0, whose tasks have all completed, from pool 18/04/17 17:13:03 INFO scheduler.DAGScheduler: ResultStage 1052 (foreachPartition at PredictorEngineApp.java:153) finished in 3.916 s 18/04/17 17:13:03 INFO scheduler.DAGScheduler: Job 1052 finished: foreachPartition at PredictorEngineApp.java:153, took 3.923281 s 18/04/17 17:13:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4bb91b34 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4bb91b340x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40750, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96ed, negotiated timeout = 60000 18/04/17 17:13:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96ed 18/04/17 17:13:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1076.0 (TID 1076) in 3865 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:13:04 INFO scheduler.DAGScheduler: ResultStage 1076 (foreachPartition at PredictorEngineApp.java:153) finished in 3.866 s 18/04/17 17:13:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1076.0, whose tasks have all completed, from pool 18/04/17 17:13:04 INFO scheduler.DAGScheduler: Job 1076 finished: foreachPartition at PredictorEngineApp.java:153, took 3.952573 s 18/04/17 17:13:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c637ff8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c637ff80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40753, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96ed closed 18/04/17 17:13:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96ee, negotiated timeout = 60000 18/04/17 17:13:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96ee 18/04/17 17:13:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.23 from job set of time 1523974380000 ms 18/04/17 17:13:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96ee closed 18/04/17 17:13:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.20 from job set of time 1523974380000 ms 18/04/17 17:13:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1053.0 (TID 1053) in 5110 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:13:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1053.0, whose tasks have all completed, from pool 18/04/17 17:13:05 INFO scheduler.DAGScheduler: ResultStage 1053 (foreachPartition at PredictorEngineApp.java:153) finished in 5.110 s 18/04/17 17:13:05 INFO scheduler.DAGScheduler: Job 1053 finished: foreachPartition at PredictorEngineApp.java:153, took 5.120836 s 18/04/17 17:13:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x212b6779 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 17:13:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x212b67790x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40757, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96f0, negotiated timeout = 60000 18/04/17 17:13:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96f0 18/04/17 17:13:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96f0 closed 18/04/17 17:13:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.31 from job set of time 1523974380000 ms 18/04/17 17:13:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1060.0 (TID 1060) in 6969 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:13:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1060.0, whose tasks have all completed, from pool 18/04/17 17:13:07 INFO scheduler.DAGScheduler: ResultStage 1060 (foreachPartition at PredictorEngineApp.java:153) finished in 6.969 s 18/04/17 17:13:07 INFO scheduler.DAGScheduler: Job 1060 finished: foreachPartition at PredictorEngineApp.java:153, took 7.001092 s 18/04/17 17:13:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f5788b8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f5788b80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40763, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96f3, negotiated timeout = 60000 18/04/17 17:13:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96f3 18/04/17 17:13:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96f3 closed 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.34 from job set of time 1523974380000 ms 18/04/17 17:13:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1055.0 (TID 1055) in 7039 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:13:07 INFO scheduler.DAGScheduler: ResultStage 1055 (foreachPartition at PredictorEngineApp.java:153) finished in 7.039 s 18/04/17 17:13:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1055.0, whose tasks have all completed, from pool 18/04/17 17:13:07 INFO scheduler.DAGScheduler: Job 1055 finished: foreachPartition at PredictorEngineApp.java:153, took 7.056472 s 18/04/17 17:13:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5bd89aef connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5bd89aef0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34384, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96ab, negotiated timeout = 60000 18/04/17 17:13:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96ab 18/04/17 17:13:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96ab closed 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.15 from job set of time 1523974380000 ms 18/04/17 17:13:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1056.0 (TID 1056) in 7098 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:13:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1056.0, whose tasks have all completed, from pool 18/04/17 17:13:07 INFO scheduler.DAGScheduler: ResultStage 1056 (foreachPartition at PredictorEngineApp.java:153) finished in 7.099 s 18/04/17 17:13:07 INFO scheduler.DAGScheduler: Job 1056 finished: foreachPartition at PredictorEngineApp.java:153, took 7.119214 s 18/04/17 17:13:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x53512c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x53512c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40769, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96f5, negotiated timeout = 60000 18/04/17 17:13:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96f5 18/04/17 17:13:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96f5 closed 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.33 from job set of time 1523974380000 ms 18/04/17 17:13:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1075.0 (TID 1075) in 7146 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:13:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1075.0, whose tasks have all completed, from pool 18/04/17 17:13:07 INFO scheduler.DAGScheduler: ResultStage 1075 (foreachPartition at PredictorEngineApp.java:153) finished in 7.148 s 18/04/17 17:13:07 INFO scheduler.DAGScheduler: Job 1075 finished: foreachPartition at PredictorEngineApp.java:153, took 7.232473 s 18/04/17 17:13:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xce9a2ab connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xce9a2ab0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40773, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96f6, negotiated timeout = 60000 18/04/17 17:13:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96f6 18/04/17 17:13:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96f6 closed 18/04/17 17:13:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.18 from job set of time 1523974380000 ms 18/04/17 17:13:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1074.0 (TID 1074) in 7968 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:13:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1074.0, whose tasks have all completed, from pool 18/04/17 17:13:08 INFO scheduler.DAGScheduler: ResultStage 1074 (foreachPartition at PredictorEngineApp.java:153) finished in 7.968 s 18/04/17 17:13:08 INFO scheduler.DAGScheduler: Job 1074 finished: foreachPartition at PredictorEngineApp.java:153, took 8.050849 s 18/04/17 17:13:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x56702a3b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x56702a3b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34397, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96ad, negotiated timeout = 60000 18/04/17 17:13:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96ad 18/04/17 17:13:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96ad closed 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.32 from job set of time 1523974380000 ms 18/04/17 17:13:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1066.0 (TID 1066) in 8057 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:13:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1066.0, whose tasks have all completed, from pool 18/04/17 17:13:08 INFO scheduler.DAGScheduler: ResultStage 1066 (foreachPartition at PredictorEngineApp.java:153) finished in 8.058 s 18/04/17 17:13:08 INFO scheduler.DAGScheduler: Job 1066 finished: foreachPartition at PredictorEngineApp.java:153, took 8.106999 s 18/04/17 17:13:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5ea9c1e1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5ea9c1e10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45377, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fe1, negotiated timeout = 60000 18/04/17 17:13:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fe1 18/04/17 17:13:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fe1 closed 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.9 from job set of time 1523974380000 ms 18/04/17 17:13:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1054.0 (TID 1054) in 8197 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:13:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1054.0, whose tasks have all completed, from pool 18/04/17 17:13:08 INFO scheduler.DAGScheduler: ResultStage 1054 (foreachPartition at PredictorEngineApp.java:153) finished in 8.197 s 18/04/17 17:13:08 INFO scheduler.DAGScheduler: Job 1054 finished: foreachPartition at PredictorEngineApp.java:153, took 8.211668 s 18/04/17 17:13:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x318ad690 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x318ad6900x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34404, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96af, negotiated timeout = 60000 18/04/17 17:13:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96af 18/04/17 17:13:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96af closed 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.2 from job set of time 1523974380000 ms 18/04/17 17:13:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1068.0 (TID 1068) in 8189 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:13:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1068.0, whose tasks have all completed, from pool 18/04/17 17:13:08 INFO scheduler.DAGScheduler: ResultStage 1068 (foreachPartition at PredictorEngineApp.java:153) finished in 8.191 s 18/04/17 17:13:08 INFO scheduler.DAGScheduler: Job 1068 finished: foreachPartition at PredictorEngineApp.java:153, took 8.245248 s 18/04/17 17:13:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5a3ed59b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5a3ed59b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34407, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96b0, negotiated timeout = 60000 18/04/17 17:13:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96b0 18/04/17 17:13:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96b0 closed 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.12 from job set of time 1523974380000 ms 18/04/17 17:13:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1073.0 (TID 1073) in 8292 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:13:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1073.0, whose tasks have all completed, from pool 18/04/17 17:13:08 INFO scheduler.DAGScheduler: ResultStage 1073 (foreachPartition at PredictorEngineApp.java:153) finished in 8.292 s 18/04/17 17:13:08 INFO scheduler.DAGScheduler: Job 1073 finished: foreachPartition at PredictorEngineApp.java:153, took 8.372668 s 18/04/17 17:13:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x42e20cd3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x42e20cd30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34410, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96b1, negotiated timeout = 60000 18/04/17 17:13:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96b1 18/04/17 17:13:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96b1 closed 18/04/17 17:13:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.27 from job set of time 1523974380000 ms 18/04/17 17:13:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1063.0 (TID 1063) in 9340 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:13:09 INFO scheduler.DAGScheduler: ResultStage 1063 (foreachPartition at PredictorEngineApp.java:153) finished in 9.340 s 18/04/17 17:13:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1063.0, whose tasks have all completed, from pool 18/04/17 17:13:09 INFO scheduler.DAGScheduler: Job 1063 finished: foreachPartition at PredictorEngineApp.java:153, took 9.381538 s 18/04/17 17:13:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x50589aa2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x50589aa20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40796, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96f9, negotiated timeout = 60000 18/04/17 17:13:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96f9 18/04/17 17:13:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96f9 closed 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.29 from job set of time 1523974380000 ms 18/04/17 17:13:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1065.0 (TID 1065) in 9399 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:13:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1065.0, whose tasks have all completed, from pool 18/04/17 17:13:09 INFO scheduler.DAGScheduler: ResultStage 1065 (foreachPartition at PredictorEngineApp.java:153) finished in 9.400 s 18/04/17 17:13:09 INFO scheduler.DAGScheduler: Job 1065 finished: foreachPartition at PredictorEngineApp.java:153, took 9.445927 s 18/04/17 17:13:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xd7ce9a1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xd7ce9a10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40799, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96fa, negotiated timeout = 60000 18/04/17 17:13:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96fa 18/04/17 17:13:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96fa closed 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.6 from job set of time 1523974380000 ms 18/04/17 17:13:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1071.0 (TID 1071) in 9600 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:13:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1071.0, whose tasks have all completed, from pool 18/04/17 17:13:09 INFO scheduler.DAGScheduler: ResultStage 1071 (foreachPartition at PredictorEngineApp.java:153) finished in 9.600 s 18/04/17 17:13:09 INFO scheduler.DAGScheduler: Job 1071 finished: foreachPartition at PredictorEngineApp.java:153, took 9.674852 s 18/04/17 17:13:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x710768e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x710768e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34420, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96b3, negotiated timeout = 60000 18/04/17 17:13:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96b3 18/04/17 17:13:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96b3 closed 18/04/17 17:13:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.19 from job set of time 1523974380000 ms 18/04/17 17:13:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1061.0 (TID 1061) in 10714 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:13:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1061.0, whose tasks have all completed, from pool 18/04/17 17:13:10 INFO scheduler.DAGScheduler: ResultStage 1061 (foreachPartition at PredictorEngineApp.java:153) finished in 10.715 s 18/04/17 17:13:10 INFO scheduler.DAGScheduler: Job 1061 finished: foreachPartition at PredictorEngineApp.java:153, took 10.750357 s 18/04/17 17:13:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x49369ee0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x49369ee00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34427, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96b4, negotiated timeout = 60000 18/04/17 17:13:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96b4 18/04/17 17:13:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96b4 closed 18/04/17 17:13:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.24 from job set of time 1523974380000 ms 18/04/17 17:13:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1067.0 (TID 1067) in 14891 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:13:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1067.0, whose tasks have all completed, from pool 18/04/17 17:13:15 INFO scheduler.DAGScheduler: ResultStage 1067 (foreachPartition at PredictorEngineApp.java:153) finished in 14.892 s 18/04/17 17:13:15 INFO scheduler.DAGScheduler: Job 1067 finished: foreachPartition at PredictorEngineApp.java:153, took 14.943637 s 18/04/17 17:13:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5d473c27 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5d473c270x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45417, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fe9, negotiated timeout = 60000 18/04/17 17:13:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fe9 18/04/17 17:13:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fe9 closed 18/04/17 17:13:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.26 from job set of time 1523974380000 ms 18/04/17 17:13:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1059.0 (TID 1059) in 15196 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:13:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1059.0, whose tasks have all completed, from pool 18/04/17 17:13:15 INFO scheduler.DAGScheduler: ResultStage 1059 (foreachPartition at PredictorEngineApp.java:153) finished in 15.196 s 18/04/17 17:13:15 INFO scheduler.DAGScheduler: Job 1059 finished: foreachPartition at PredictorEngineApp.java:153, took 15.224958 s 18/04/17 17:13:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2becbc2e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2becbc2e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45422, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fea, negotiated timeout = 60000 18/04/17 17:13:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fea 18/04/17 17:13:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fea closed 18/04/17 17:13:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.10 from job set of time 1523974380000 ms 18/04/17 17:13:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1070.0 (TID 1070) in 18138 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:13:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1070.0, whose tasks have all completed, from pool 18/04/17 17:13:18 INFO scheduler.DAGScheduler: ResultStage 1070 (foreachPartition at PredictorEngineApp.java:153) finished in 18.140 s 18/04/17 17:13:18 INFO scheduler.DAGScheduler: Job 1070 finished: foreachPartition at PredictorEngineApp.java:153, took 18.210796 s 18/04/17 17:13:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x49b23494 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x49b234940x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45429, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28fec, negotiated timeout = 60000 18/04/17 17:13:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28fec 18/04/17 17:13:18 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28fec closed 18/04/17 17:13:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:18 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.22 from job set of time 1523974380000 ms 18/04/17 17:13:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1072.0 (TID 1072) in 18290 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:13:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1072.0, whose tasks have all completed, from pool 18/04/17 17:13:18 INFO scheduler.DAGScheduler: ResultStage 1072 (foreachPartition at PredictorEngineApp.java:153) finished in 18.290 s 18/04/17 17:13:18 INFO scheduler.DAGScheduler: Job 1072 finished: foreachPartition at PredictorEngineApp.java:153, took 18.367600 s 18/04/17 17:13:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4b61a00c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:13:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4b61a00c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:13:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:13:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:40837, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:13:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c96fc, negotiated timeout = 60000 18/04/17 17:13:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c96fc 18/04/17 17:13:18 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c96fc closed 18/04/17 17:13:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:13:18 INFO scheduler.JobScheduler: Finished job streaming job 1523974380000 ms.1 from job set of time 1523974380000 ms 18/04/17 17:13:18 INFO scheduler.JobScheduler: Total delay: 18.456 s for time 1523974380000 ms (execution: 18.404 s) 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1404 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1404 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1404 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1404 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1405 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1405 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1405 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1405 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1406 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1406 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1406 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1406 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1407 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1407 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1407 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1407 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1408 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1408 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1408 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1408 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1409 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1409 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1409 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1409 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1410 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1410 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1410 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1410 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1411 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1411 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1411 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1411 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1412 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1412 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1412 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1412 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1413 
from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1413 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1413 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1413 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1414 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1414 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1414 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1414 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1415 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1415 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1415 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1415 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1416 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1416 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1416 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1416 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1417 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1417 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1417 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1417 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1418 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1418 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1418 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1418 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1419 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1419 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1419 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1419 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1420 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1420 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1420 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1420 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1421 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1421 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1421 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1421 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1422 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1422 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1422 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1422 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1423 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1423 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1423 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1423 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1424 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1424 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1424 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1424 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1425 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1425 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1425 from 
persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1425 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1426 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1426 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1426 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1426 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1427 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1427 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1427 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1427 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1428 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1428 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1428 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1428 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1429 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1429 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1429 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1429 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1430 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1430 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1430 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1430 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1431 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1431 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1431 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1431 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1432 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1432 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1432 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1432 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1433 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1433 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1433 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1433 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1434 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1434 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1434 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1434 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1435 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1435 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1435 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1435 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1436 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1436 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1436 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1436 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1437 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1437 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1437 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1437 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1438 from 
persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1438 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1438 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1438 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1439 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1439 18/04/17 17:13:18 INFO kafka.KafkaRDD: Removing RDD 1439 from persistence list 18/04/17 17:13:18 INFO storage.BlockManager: Removing RDD 1439 18/04/17 17:13:18 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:13:18 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974260000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Added jobs for time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.1 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.2 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.0 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.3 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.3 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.0 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.4 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.6 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.5 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.7 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.8 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.4 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.9 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.10 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.11 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.12 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.13 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.14 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.13 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.15 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.14 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.17 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.19 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.16 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.18 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.17 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.20 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.16 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.21 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.23 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.22 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.24 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.25 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.21 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.26 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.27 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.28 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.29 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.30 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.31 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.30 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.32 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.34 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.33 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974440000 ms.35 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1078 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: 
ResultStage 1078 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1078 (KafkaRDD[1507] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1078 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1078_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1078_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1078 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 
INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1078 (KafkaRDD[1507] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1078.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1079 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1079 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1079 (KafkaRDD[1477] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1078.0 (TID 1078, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1079 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1079_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1079_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1079 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1079 (KafkaRDD[1477] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1079.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1080 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1080 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1080 (KafkaRDD[1487] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1079.0 (TID 1079, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1080 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1080_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1080_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1080 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1080 (KafkaRDD[1487] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1080.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1081 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1081 (foreachPartition at PredictorEngineApp.java:153) 
18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1081 (KafkaRDD[1500] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1080.0 (TID 1080, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1081 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1081_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1081_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1052_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1081 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1081 (KafkaRDD[1500] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1081.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1082 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1082 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1082 (KafkaRDD[1499] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1081.0 (TID 1081, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1082 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1052_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1078_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1082_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1082_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1082 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1082 (KafkaRDD[1499] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1082.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1083 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1083 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of 
final stage: List() 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1079_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1083 (KafkaRDD[1498] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1082.0 (TID 1082, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1083 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1080_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1059_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1083_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1083_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1083 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1083 (KafkaRDD[1498] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1083.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1084 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1084 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1084 (KafkaRDD[1501] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1083.0 (TID 1083, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1084 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1059_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1060 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1084_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1084_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1058_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1084 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1084 (KafkaRDD[1501] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1084.0 with 1 tasks 18/04/17 17:14:00 
INFO scheduler.DAGScheduler: Got job 1085 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1085 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1085 (KafkaRDD[1485] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1084.0 (TID 1084, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1085 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1081_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1058_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1062 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1085_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1060_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1085_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1085 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1085 (KafkaRDD[1485] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1085.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1086 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1086 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1086 (KafkaRDD[1502] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1086 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1085.0 (TID 1085, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1082_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1060_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1061 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1057 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1083_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed 
broadcast_1056_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1086_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1086_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1056_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1086 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1086 (KafkaRDD[1502] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1086.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1087 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1087 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1087 (KafkaRDD[1504] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1086.0 (TID 1086, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1087 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1061_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1084_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1061_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1087_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1087_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1087 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1087 (KafkaRDD[1504] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1087.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1088 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1088 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1088 (KafkaRDD[1483] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1087.0 (TID 1087, ***hostname masked***, 
executor 1, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1064 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1088 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1062_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1062_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1088_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1088_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1088 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1088 (KafkaRDD[1483] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1088.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1090 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1085_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1089 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1089 (KafkaRDD[1494] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1088.0 (TID 1088, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1089 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1089_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1089_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1089 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1089 (KafkaRDD[1494] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1089.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1089 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1090 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1090 (KafkaRDD[1505] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1090 stored as values in memory (estimated size 
5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1089.0 (TID 1089, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1090_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1090_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1090 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1090 (KafkaRDD[1505] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1090.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1091 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1091 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1091 (KafkaRDD[1481] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1091 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1090.0 (TID 1090, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1087_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1091_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1091_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1091 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1091 (KafkaRDD[1481] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1091.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1092 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1092 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1092 (KafkaRDD[1488] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1092 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1091.0 (TID 1091, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1092_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO 
storage.BlockManagerInfo: Added broadcast_1092_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1092 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1092 (KafkaRDD[1488] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1092.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1093 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1093 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1093 (KafkaRDD[1508] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1093 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1092.0 (TID 1092, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1088_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1093_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1093_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1093 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1093 (KafkaRDD[1508] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1093.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1094 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1094 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1094 (KafkaRDD[1486] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1086_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1063 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1094 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1063_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1093.0 (TID 1093, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1063_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 
18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1094_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1094_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1089_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1094 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1094 (KafkaRDD[1486] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1094.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1095 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1095 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1095 (KafkaRDD[1510] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1095 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1094.0 (TID 1094, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1066 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1064_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1095_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1095_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1095 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1095 (KafkaRDD[1510] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1095.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1096 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1096 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1096 (KafkaRDD[1491] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1064_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1096 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1095.0 (TID 1095, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 
2053 bytes) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1091_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1065 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1092_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1066_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1066_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1096_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1096_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1096 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1096 (KafkaRDD[1491] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1096.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1097 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1097 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1097 (KafkaRDD[1495] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1067 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1097 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1096.0 (TID 1096, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1065_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1065_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1093_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1097_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1097_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1097 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1097 (KafkaRDD[1495] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1069 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1097.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1098 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 
18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1098 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1098 (KafkaRDD[1478] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1098 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1067_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1097.0 (TID 1097, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1095_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1067_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1068 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1069_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1098_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1098_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1098 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1098 (KafkaRDD[1478] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1098.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1101 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1099 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1099 (KafkaRDD[1509] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1099 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1098.0 (TID 1098, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1069_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1070 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1068_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1099_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1068_piece0 on ***hostname masked***:35790 in memory 
(size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1099_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1099 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1099 (KafkaRDD[1509] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1099.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1100 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1100 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1100 (KafkaRDD[1482] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1100 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1094_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1099.0 (TID 1099, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1097_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1053_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1090_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1053_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1100_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1100_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1100 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1100 (KafkaRDD[1482] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1100.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1099 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1101 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1101 (KafkaRDD[1484] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1101 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 1100.0 (TID 1100, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1101_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1101_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1101 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1101 (KafkaRDD[1484] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1101.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1102 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1102 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1102 (KafkaRDD[1496] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1102 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1101.0 (TID 1101, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1056 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1071 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1055 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1072 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1070_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1098_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1099_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1102_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1102_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1102 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1102 (KafkaRDD[1496] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1102.0 with 1 tasks 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1070_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1103 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1103 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO 
scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1103 (KafkaRDD[1511] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1103 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1102.0 (TID 1102, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1103_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1103_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1103 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1103 (KafkaRDD[1511] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1103.0 with 1 tasks 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Got job 1104 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1104 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1104 (KafkaRDD[1503] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1104 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1100_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1072_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1103.0 (TID 1103, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1072_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.MemoryStore: Block broadcast_1104_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1104_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1073 18/04/17 17:14:00 INFO spark.SparkContext: Created broadcast 1104 from broadcast at DAGScheduler.scala:1006 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1104 (KafkaRDD[1503] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Adding task set 1104.0 with 1 tasks 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1096_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1071_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added 
broadcast_1101_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1104.0 (TID 1104, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1071_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1054 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1059 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1074 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1074_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1085.0 (TID 1085) in 58 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1085.0, whose tasks have all completed, from pool 18/04/17 17:14:00 INFO scheduler.DAGScheduler: ResultStage 1085 (foreachPartition at PredictorEngineApp.java:153) finished in 0.060 s 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Job 1085 finished: foreachPartition at PredictorEngineApp.java:153, took 0.099362 s 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1102_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1074_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1103_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4582c1b2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4582c1b20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1075 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1073_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Added broadcast_1104_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34611, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1073_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1054_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1054_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1075_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96c1, negotiated timeout = 60000 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1075_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96c1 18/04/17 17:14:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96c1 closed 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1089.0 (TID 1089) in 71 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1089.0, whose tasks have all completed, from pool 18/04/17 17:14:00 INFO scheduler.DAGScheduler: ResultStage 1089 (foreachPartition at PredictorEngineApp.java:153) finished in 0.072 s 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Job 1090 finished: foreachPartition at PredictorEngineApp.java:153, took 0.124489 s 18/04/17 17:14:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3bbc7a22 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3bbc7a220x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34614, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.9 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96c2, negotiated timeout = 60000 18/04/17 17:14:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96c2 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1076 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1078 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1076_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1076_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1077 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1053 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1093.0 (TID 1093) in 75 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:14:00 INFO scheduler.DAGScheduler: ResultStage 1093 (foreachPartition at PredictorEngineApp.java:153) finished in 0.076 s 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1093.0, whose tasks have all completed, from pool 18/04/17 17:14:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96c2 closed 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Job 1093 finished: foreachPartition at PredictorEngineApp.java:153, took 0.140885 s 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1057_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x65573393 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1057_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x655733930x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1086 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34617, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1085_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1085_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1089_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.18 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1089_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96c4, negotiated timeout = 60000 18/04/17 17:14:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96c4 18/04/17 17:14:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96c4 closed 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1090 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1093_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1093_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1094 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1077_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1077_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1055_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.32 from job set of time 1523974440000 ms 18/04/17 17:14:00 INFO storage.BlockManagerInfo: Removed broadcast_1055_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:14:00 INFO spark.ContextCleaner: Cleaned accumulator 1058 18/04/17 17:14:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1103.0 (TID 1103) in 189 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:14:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1103.0, whose tasks have all completed, from pool 18/04/17 17:14:00 INFO scheduler.DAGScheduler: ResultStage 1103 (foreachPartition at PredictorEngineApp.java:153) finished in 0.190 s 18/04/17 17:14:00 INFO scheduler.DAGScheduler: Job 1103 finished: foreachPartition at PredictorEngineApp.java:153, took 0.281753 s 18/04/17 17:14:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x27383ed connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname 
masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x27383ed0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41002, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c970c, negotiated timeout = 60000 18/04/17 17:14:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c970c 18/04/17 17:14:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c970c closed 18/04/17 17:14:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.35 from job set of time 1523974440000 ms 18/04/17 17:14:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1084.0 (TID 1084) in 1450 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:14:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1084.0, whose tasks have all completed, from pool 18/04/17 17:14:01 INFO scheduler.DAGScheduler: ResultStage 1084 (foreachPartition at PredictorEngineApp.java:153) finished in 1.451 s 18/04/17 17:14:01 INFO scheduler.DAGScheduler: Job 1084 finished: foreachPartition at PredictorEngineApp.java:153, took 1.486860 s 18/04/17 17:14:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x574f4264 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x574f42640x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34625, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96cb, negotiated timeout = 60000 18/04/17 17:14:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96cb 18/04/17 17:14:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96cb closed 18/04/17 17:14:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.25 from job set of time 1523974440000 ms 18/04/17 17:14:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1088.0 (TID 1088) in 4263 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:14:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1088.0, whose tasks have all completed, from pool 18/04/17 17:14:04 INFO scheduler.DAGScheduler: ResultStage 1088 (foreachPartition at PredictorEngineApp.java:153) finished in 4.263 s 18/04/17 17:14:04 INFO scheduler.DAGScheduler: Job 1088 finished: foreachPartition at PredictorEngineApp.java:153, took 4.312874 s 18/04/17 17:14:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5742fce8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5742fce80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34632, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96cc, negotiated timeout = 60000 18/04/17 17:14:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96cc 18/04/17 17:14:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96cc closed 18/04/17 17:14:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.7 from job set of time 1523974440000 ms 18/04/17 17:14:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1101.0 (TID 1101) in 4339 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:14:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1101.0, whose tasks have all completed, from pool 18/04/17 17:14:04 INFO scheduler.DAGScheduler: ResultStage 1101 (foreachPartition at PredictorEngineApp.java:153) finished in 4.341 s 18/04/17 17:14:04 INFO scheduler.DAGScheduler: Job 1099 finished: foreachPartition at PredictorEngineApp.java:153, took 4.428329 s 18/04/17 17:14:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x107319aa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x107319aa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45612, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b28ffd, negotiated timeout = 60000 18/04/17 17:14:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b28ffd 18/04/17 17:14:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b28ffd closed 18/04/17 17:14:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.8 from job set of time 1523974440000 ms 18/04/17 17:14:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1087.0 (TID 1087) in 5140 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:14:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1087.0, whose tasks have all completed, from pool 18/04/17 17:14:05 INFO scheduler.DAGScheduler: ResultStage 1087 (foreachPartition at PredictorEngineApp.java:153) finished in 5.140 s 18/04/17 17:14:05 INFO scheduler.DAGScheduler: Job 1087 finished: foreachPartition at PredictorEngineApp.java:153, took 5.187725 s 18/04/17 17:14:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1a01bb57 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1a01bb570x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34640, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96cd, negotiated timeout = 60000 18/04/17 17:14:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96cd 18/04/17 17:14:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96cd closed 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.28 from job set of time 1523974440000 ms 18/04/17 17:14:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1082.0 (TID 1082) in 5298 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:14:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1082.0, whose tasks have all completed, from pool 18/04/17 17:14:05 INFO scheduler.DAGScheduler: ResultStage 1082 (foreachPartition at PredictorEngineApp.java:153) finished in 5.298 s 18/04/17 17:14:05 INFO scheduler.DAGScheduler: Job 1082 finished: foreachPartition at PredictorEngineApp.java:153, took 5.328362 s 18/04/17 17:14:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3156ef3b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3156ef3b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34643, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96ce, negotiated timeout = 60000 18/04/17 17:14:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96ce 18/04/17 17:14:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96ce closed 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.23 from job set of time 1523974440000 ms 18/04/17 17:14:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1095.0 (TID 1095) in 5786 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:14:05 INFO scheduler.DAGScheduler: ResultStage 1095 (foreachPartition at PredictorEngineApp.java:153) finished in 5.787 s 18/04/17 17:14:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1095.0, whose tasks have all completed, from pool 18/04/17 17:14:05 INFO scheduler.DAGScheduler: Job 1095 finished: foreachPartition at PredictorEngineApp.java:153, took 5.892206 s 18/04/17 17:14:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x517bbd96 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x517bbd960x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41028, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9713, negotiated timeout = 60000 18/04/17 17:14:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9713 18/04/17 17:14:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9713 closed 18/04/17 17:14:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.34 from job set of time 1523974440000 ms 18/04/17 17:14:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1078.0 (TID 1078) in 6142 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:14:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1078.0, whose tasks have all completed, from pool 18/04/17 17:14:06 INFO scheduler.DAGScheduler: ResultStage 1078 (foreachPartition at PredictorEngineApp.java:153) finished in 6.142 s 18/04/17 17:14:06 INFO scheduler.DAGScheduler: Job 1078 finished: foreachPartition at PredictorEngineApp.java:153, took 6.149828 s 18/04/17 17:14:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6047c470 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6047c4700x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45628, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29000, negotiated timeout = 60000 18/04/17 17:14:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29000 18/04/17 17:14:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29000 closed 18/04/17 17:14:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.31 from job set of time 1523974440000 ms 18/04/17 17:14:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1100.0 (TID 1100) in 7769 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:14:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1100.0, whose tasks have all completed, from pool 18/04/17 17:14:07 INFO scheduler.DAGScheduler: ResultStage 1100 (foreachPartition at PredictorEngineApp.java:153) finished in 7.770 s 18/04/17 17:14:07 INFO scheduler.DAGScheduler: Job 1100 finished: foreachPartition at PredictorEngineApp.java:153, took 7.855785 s 18/04/17 17:14:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x23665248 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x236652480x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45634, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29002, negotiated timeout = 60000 18/04/17 17:14:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29002 18/04/17 17:14:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29002 closed 18/04/17 17:14:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.6 from job set of time 1523974440000 ms 18/04/17 17:14:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1099.0 (TID 1099) in 8526 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:14:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1099.0, whose tasks have all completed, from pool 18/04/17 17:14:08 INFO scheduler.DAGScheduler: ResultStage 1099 (foreachPartition at PredictorEngineApp.java:153) finished in 8.527 s 18/04/17 17:14:08 INFO scheduler.DAGScheduler: Job 1101 finished: foreachPartition at PredictorEngineApp.java:153, took 8.610454 s 18/04/17 17:14:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x321dad3f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x321dad3f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41043, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9715, negotiated timeout = 60000 18/04/17 17:14:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9715 18/04/17 17:14:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9715 closed 18/04/17 17:14:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.33 from job set of time 1523974440000 ms 18/04/17 17:14:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1098.0 (TID 1098) in 8732 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:14:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1098.0, whose tasks have all completed, from pool 18/04/17 17:14:08 INFO scheduler.DAGScheduler: ResultStage 1098 (foreachPartition at PredictorEngineApp.java:153) finished in 8.733 s 18/04/17 17:14:08 INFO scheduler.DAGScheduler: Job 1098 finished: foreachPartition at PredictorEngineApp.java:153, took 8.813357 s 18/04/17 17:14:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x815fa48 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x815fa480x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45641, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29003, negotiated timeout = 60000 18/04/17 17:14:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29003 18/04/17 17:14:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29003 closed 18/04/17 17:14:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.2 from job set of time 1523974440000 ms 18/04/17 17:14:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1104.0 (TID 1104) in 9048 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:14:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1104.0, whose tasks have all completed, from pool 18/04/17 17:14:09 INFO scheduler.DAGScheduler: ResultStage 1104 (foreachPartition at PredictorEngineApp.java:153) finished in 9.048 s 18/04/17 17:14:09 INFO scheduler.DAGScheduler: Job 1104 finished: foreachPartition at PredictorEngineApp.java:153, took 9.141901 s 18/04/17 17:14:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x11e98d18 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x11e98d180x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45645, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29004, negotiated timeout = 60000 18/04/17 17:14:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29004 18/04/17 17:14:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29004 closed 18/04/17 17:14:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.27 from job set of time 1523974440000 ms 18/04/17 17:14:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1097.0 (TID 1097) in 10062 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:14:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1097.0, whose tasks have all completed, from pool 18/04/17 17:14:10 INFO scheduler.DAGScheduler: ResultStage 1097 (foreachPartition at PredictorEngineApp.java:153) finished in 10.062 s 18/04/17 17:14:10 INFO scheduler.DAGScheduler: Job 1097 finished: foreachPartition at PredictorEngineApp.java:153, took 10.139883 s 18/04/17 17:14:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x756b44ad connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x756b44ad0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41054, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c971a, negotiated timeout = 60000 18/04/17 17:14:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c971a 18/04/17 17:14:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c971a closed 18/04/17 17:14:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.19 from job set of time 1523974440000 ms 18/04/17 17:14:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1081.0 (TID 1081) in 10155 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:14:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1081.0, whose tasks have all completed, from pool 18/04/17 17:14:10 INFO scheduler.DAGScheduler: ResultStage 1081 (foreachPartition at PredictorEngineApp.java:153) finished in 10.156 s 18/04/17 17:14:10 INFO scheduler.DAGScheduler: Job 1081 finished: foreachPartition at PredictorEngineApp.java:153, took 10.182921 s 18/04/17 17:14:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f8d2d3a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f8d2d3a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34675, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96d1, negotiated timeout = 60000 18/04/17 17:14:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96d1 18/04/17 17:14:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96d1 closed 18/04/17 17:14:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.24 from job set of time 1523974440000 ms 18/04/17 17:14:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1092.0 (TID 1092) in 10822 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:14:10 INFO scheduler.DAGScheduler: ResultStage 1092 (foreachPartition at PredictorEngineApp.java:153) finished in 10.823 s 18/04/17 17:14:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1092.0, whose tasks have all completed, from pool 18/04/17 17:14:10 INFO scheduler.DAGScheduler: Job 1092 finished: foreachPartition at PredictorEngineApp.java:153, took 10.918198 s 18/04/17 17:14:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7fc311c4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7fc311c40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41060, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c971b, negotiated timeout = 60000 18/04/17 17:14:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c971b 18/04/17 17:14:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c971b closed 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.12 from job set of time 1523974440000 ms 18/04/17 17:14:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1096.0 (TID 1096) in 11046 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:14:11 INFO scheduler.DAGScheduler: ResultStage 1096 (foreachPartition at PredictorEngineApp.java:153) finished in 11.046 s 18/04/17 17:14:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1096.0, whose tasks have all completed, from pool 18/04/17 17:14:11 INFO scheduler.DAGScheduler: Job 1096 finished: foreachPartition at PredictorEngineApp.java:153, took 11.121473 s 18/04/17 17:14:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b6347fb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7b6347fb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41065, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c971c, negotiated timeout = 60000 18/04/17 17:14:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c971c 18/04/17 17:14:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c971c closed 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.15 from job set of time 1523974440000 ms 18/04/17 17:14:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1080.0 (TID 1080) in 11289 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:14:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1080.0, whose tasks have all completed, from pool 18/04/17 17:14:11 INFO scheduler.DAGScheduler: ResultStage 1080 (foreachPartition at PredictorEngineApp.java:153) finished in 11.289 s 18/04/17 17:14:11 INFO scheduler.DAGScheduler: Job 1080 finished: foreachPartition at PredictorEngineApp.java:153, took 11.302786 s 18/04/17 17:14:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5884257f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5884257f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34686, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96d4, negotiated timeout = 60000 18/04/17 17:14:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96d4 18/04/17 17:14:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96d4 closed 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.11 from job set of time 1523974440000 ms 18/04/17 17:14:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1090.0 (TID 1090) in 11378 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:14:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1090.0, whose tasks have all completed, from pool 18/04/17 17:14:11 INFO scheduler.DAGScheduler: ResultStage 1090 (foreachPartition at PredictorEngineApp.java:153) finished in 11.378 s 18/04/17 17:14:11 INFO scheduler.DAGScheduler: Job 1089 finished: foreachPartition at PredictorEngineApp.java:153, took 11.434927 s 18/04/17 17:14:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5277fe80 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5277fe800x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34689, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96d5, negotiated timeout = 60000 18/04/17 17:14:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96d5 18/04/17 17:14:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96d5 closed 18/04/17 17:14:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.29 from job set of time 1523974440000 ms 18/04/17 17:14:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1102.0 (TID 1102) in 12402 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:14:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1102.0, whose tasks have all completed, from pool 18/04/17 17:14:12 INFO scheduler.DAGScheduler: ResultStage 1102 (foreachPartition at PredictorEngineApp.java:153) finished in 12.402 s 18/04/17 17:14:12 INFO scheduler.DAGScheduler: Job 1102 finished: foreachPartition at PredictorEngineApp.java:153, took 12.492505 s 18/04/17 17:14:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xce34f11 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xce34f110x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45670, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29007, negotiated timeout = 60000 18/04/17 17:14:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29007 18/04/17 17:14:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29007 closed 18/04/17 17:14:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:12 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.20 from job set of time 1523974440000 ms 18/04/17 17:14:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1083.0 (TID 1083) in 14272 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:14:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1083.0, whose tasks have all completed, from pool 18/04/17 17:14:14 INFO scheduler.DAGScheduler: ResultStage 1083 (foreachPartition at PredictorEngineApp.java:153) finished in 14.272 s 18/04/17 17:14:14 INFO scheduler.DAGScheduler: Job 1083 finished: foreachPartition at PredictorEngineApp.java:153, took 14.304901 s 18/04/17 17:14:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x48cdcfa1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x48cdcfa10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41082, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c971d, negotiated timeout = 60000 18/04/17 17:14:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c971d 18/04/17 17:14:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c971d closed 18/04/17 17:14:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.22 from job set of time 1523974440000 ms 18/04/17 17:14:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1079.0 (TID 1079) in 17254 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:14:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1079.0, whose tasks have all completed, from pool 18/04/17 17:14:17 INFO scheduler.DAGScheduler: ResultStage 1079 (foreachPartition at PredictorEngineApp.java:153) finished in 17.254 s 18/04/17 17:14:17 INFO scheduler.DAGScheduler: Job 1079 finished: foreachPartition at PredictorEngineApp.java:153, took 17.264363 s 18/04/17 17:14:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3e83a8d5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3e83a8d50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45686, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2900b, negotiated timeout = 60000 18/04/17 17:14:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2900b 18/04/17 17:14:17 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2900b closed 18/04/17 17:14:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:17 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.1 from job set of time 1523974440000 ms 18/04/17 17:14:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1091.0 (TID 1091) in 19360 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:14:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 1091.0, whose tasks have all completed, from pool 18/04/17 17:14:19 INFO scheduler.DAGScheduler: ResultStage 1091 (foreachPartition at PredictorEngineApp.java:153) finished in 19.360 s 18/04/17 17:14:19 INFO scheduler.DAGScheduler: Job 1091 finished: foreachPartition at PredictorEngineApp.java:153, took 19.419781 s 18/04/17 17:14:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x33bf67e3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x33bf67e30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41096, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9720, negotiated timeout = 60000 18/04/17 17:14:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9720 18/04/17 17:14:19 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9720 closed 18/04/17 17:14:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:19 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.5 from job set of time 1523974440000 ms 18/04/17 17:14:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1094.0 (TID 1094) in 21013 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:14:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 1094.0, whose tasks have all completed, from pool 18/04/17 17:14:21 INFO scheduler.DAGScheduler: ResultStage 1094 (foreachPartition at PredictorEngineApp.java:153) finished in 21.014 s 18/04/17 17:14:21 INFO scheduler.DAGScheduler: Job 1094 finished: foreachPartition at PredictorEngineApp.java:153, took 21.081966 s 18/04/17 17:14:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3832b11a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:14:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3832b11a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:14:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
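[Editorial sketch] The application source is not part of this log, so the following is only a minimal, hypothetical reconstruction of the pattern the entries above imply: a direct Kafka stream created around PredictorEngineApp.java:125 ("createDirectStream"), roughly 36 output operations per batch (streaming jobs ms.0 through ms.35), 60-second batches (batch times 1523974440000 ms and 1523974500000 ms), and a foreachPartition output action at PredictorEngineApp.java:153 that opens an HBase connection inside each task and closes it again, which is why every job produces a short-lived ZooKeeper session ("hconnection-0x…" open/close pairs). All class, topic, table, and column names below are placeholders, not taken from the log.

```java
// Hypothetical sketch of the driver code suggested by this log (Spark 1.6 / HBase 1.x APIs).
import java.util.*;
import kafka.serializer.StringDecoder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public class PredictorEngineSketch {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
    // 60 s batches would match the 1523974440000 -> 1523974500000 ms batch times in the log.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "broker1:9092");      // placeholder broker list
    Set<String> topics = Collections.singleton("predictions-in"); // placeholder topic

    // Mirrors the "createDirectStream at PredictorEngineApp.java:125" entries (KafkaRDD lineage).
    JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, topics);

    // Mirrors the "foreachPartition at PredictorEngineApp.java:153" entries: each task
    // opens an HBase connection (hence a new ZooKeeper session), writes, and closes it.
    stream.foreachRDD(rdd -> rdd.foreachPartition(records -> {
      Configuration hbaseConf = HBaseConfiguration.create(); // picks up the shipped hbase-site.xml
      try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
           Table table = connection.getTable(TableName.valueOf("predictions"))) { // placeholder table
        while (records.hasNext()) {
          Tuple2<String, String> record = records.next();
          Put put = new Put(Bytes.toBytes(record._1()));
          put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"), Bytes.toBytes(record._2()));
          table.put(put);
        }
      }
    }));

    jssc.start();
    jssc.awaitTermination();
  }
}
```

Opening and closing a connection per partition is the standard way to keep HBase clients out of the driver's closure, at the cost of the per-job ZooKeeper connect/disconnect churn visible throughout this log; a connection pool or a longer-lived per-executor connection would be the usual way to reduce it.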
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:14:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41104, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:14:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9723, negotiated timeout = 60000 18/04/17 17:14:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9723 18/04/17 17:14:21 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9723 closed 18/04/17 17:14:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:14:21 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.10 from job set of time 1523974440000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Added jobs for time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.1 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.2 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.0 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.3 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.4 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.3 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.0 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.7 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.5 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.4 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.8 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.6 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.10 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.9 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.11 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.12 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.13 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.14 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.13 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.15 from job set of time 1523974500000 ms 18/04/17 17:15:00 
INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.14 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.16 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.16 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.17 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.17 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.18 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.19 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.20 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.21 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.22 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.21 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.24 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.23 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.25 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.26 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.27 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.28 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.29 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.30 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.30 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.31 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.33 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.32 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.34 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974500000 ms.35 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1105 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1105 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1105 (KafkaRDD[1514] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1105 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1105_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1105_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 
491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1105 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1105 (KafkaRDD[1514] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1105.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1106 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1106 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1106 (KafkaRDD[1534] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1105.0 (TID 1105, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1106 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1106_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1106_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1106 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1106 (KafkaRDD[1534] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1106.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1107 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1107 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1107 (KafkaRDD[1519] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1106.0 (TID 1106, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1107 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1107_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1107_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1107 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1107 (KafkaRDD[1519] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1107.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1108 (foreachPartition at PredictorEngineApp.java:153) with 1 output 
partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1108 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1108 (KafkaRDD[1541] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1107.0 (TID 1107, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1108 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1108_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1108_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1108 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1108 (KafkaRDD[1541] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1108.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1109 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1109 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1109 (KafkaRDD[1545] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1108.0 (TID 1108, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1109 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1105_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1109_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1109_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1109 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1109 (KafkaRDD[1545] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1109.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1110 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1110 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 
ResultStage 1110 (KafkaRDD[1535] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1110 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1109.0 (TID 1109, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1110_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1110_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1110 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1110 (KafkaRDD[1535] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1110.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1111 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1111 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1111 (KafkaRDD[1530] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1106_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1110.0 (TID 1110, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1111 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1107_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1108_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1111_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1111_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1111 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1111 (KafkaRDD[1530] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1111.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1112 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1112 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1112 (KafkaRDD[1520] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1112 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1111.0 (TID 1111, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1112_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1112_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1112 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1112 (KafkaRDD[1520] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1112.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1113 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1113 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1113 (KafkaRDD[1537] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1113 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1112.0 (TID 1112, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1113_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1113_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1113 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1113 (KafkaRDD[1537] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1113.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1114 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1114 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1114 (KafkaRDD[1532] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1114 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1113.0 (TID 1113, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1110_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 
GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1109_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1114_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1114_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1114 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1114 (KafkaRDD[1532] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1114.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1115 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1115 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1115 (KafkaRDD[1539] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1115 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1114.0 (TID 1114, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1115_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1115_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1115 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1115 (KafkaRDD[1539] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1115.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1116 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1116 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1116 (KafkaRDD[1518] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1116 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1115.0 (TID 1115, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1112_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1114_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block 
broadcast_1116_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1116_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1116 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1116 (KafkaRDD[1518] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1116.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1117 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1117 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1117 (KafkaRDD[1521] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1116.0 (TID 1116, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1117 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1113_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1117_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1117_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1117 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1117 (KafkaRDD[1521] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1117.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1118 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1118 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1118 (KafkaRDD[1513] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1118 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1117.0 (TID 1117, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1118_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1118_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1118 from broadcast at DAGScheduler.scala:1006 18/04/17 
17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1118 (KafkaRDD[1513] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1118.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1119 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1119 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1119 (KafkaRDD[1547] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1119 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1118.0 (TID 1118, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1111_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1117_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1119_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1119_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1119 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1119 (KafkaRDD[1547] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1119.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1120 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1120 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1120 (KafkaRDD[1524] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1120 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1119.0 (TID 1119, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1115_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1116_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1120_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1120_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO 
spark.SparkContext: Created broadcast 1120 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1118_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1120 (KafkaRDD[1524] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1120.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1121 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1121 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1121 (KafkaRDD[1527] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1121 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1120.0 (TID 1120, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1121_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1121_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1121 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1121 (KafkaRDD[1527] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1121.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1123 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1122 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1122 (KafkaRDD[1523] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1122 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1121.0 (TID 1121, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1122_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1078_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1122_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1122 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1122 (KafkaRDD[1523] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1122.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1122 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1123 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1123 (KafkaRDD[1540] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1119_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1078_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1123 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1122.0 (TID 1122, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1120_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1080 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1082 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1079 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1085 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1083_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1123_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1123_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1123 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1123 (KafkaRDD[1540] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1123.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1125 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1124 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1124 (KafkaRDD[1517] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1083_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1124 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1123.0 (TID 1123, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 
2045 bytes) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1084 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1082_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1122_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1082_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1083 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1088 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1124_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1124_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1084_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1124 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1124 (KafkaRDD[1517] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1124.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1124 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1125 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1125 (KafkaRDD[1531] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1125 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1124.0 (TID 1124, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1084_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1121_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1090_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1125_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1125_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1125 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1090_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1125 (KafkaRDD[1531] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1125.0 
with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1126 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1126 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1126 (KafkaRDD[1522] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1126 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1125.0 (TID 1125, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1091 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1088_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1124_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1126_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1126_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1126 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1126 (KafkaRDD[1522] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1126.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1127 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1127 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1127 (KafkaRDD[1538] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1088_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1127 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1123_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1126.0 (TID 1126, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1089 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1087_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1087_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1095 18/04/17 17:15:00 INFO 
storage.BlockManagerInfo: Removed broadcast_1092_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1127_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1127_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1127 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1127 (KafkaRDD[1538] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1127.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1128 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1128 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1092_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1128 (KafkaRDD[1546] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1128 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1127.0 (TID 1127, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1093 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1091_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1128_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1128_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1128 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1128 (KafkaRDD[1546] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1128.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1129 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1129 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1129 (KafkaRDD[1543] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1091_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1129 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 1128.0 (TID 1128, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1092 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1096_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1129_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1129_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1129 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1129 (KafkaRDD[1543] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1129.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1130 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1130 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1130 (KafkaRDD[1536] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1096_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1130 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1129.0 (TID 1129, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1097 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1095_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1130_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1095_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1130_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1130 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1130 (KafkaRDD[1536] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1130.0 with 1 tasks 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Got job 1131 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1131 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1126_piece0 in memory on ***hostname 
masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1131 (KafkaRDD[1544] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1096 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1131 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1094_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1130.0 (TID 1130, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1127_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1094_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.MemoryStore: Block broadcast_1131_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1100 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1131_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:15:00 INFO spark.SparkContext: Created broadcast 1131 from broadcast at DAGScheduler.scala:1006 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1131 (KafkaRDD[1544] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Adding task set 1131.0 with 1 tasks 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1098_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1098_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1131.0 (TID 1131, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1111.0 (TID 1111) in 80 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: ResultStage 1111 (foreachPartition at PredictorEngineApp.java:153) finished in 0.080 s 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1111.0, whose tasks have all completed, from pool 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Job 1111 finished: foreachPartition at PredictorEngineApp.java:153, took 0.108747 s 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1129_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1128_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1099 18/04/17 17:15:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b4c75dc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1b4c75dc0x0, quorum=***hostname 
masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1097_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1130_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1097_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1098 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34873, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1101_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1101_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1131_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96ea, negotiated timeout = 60000 18/04/17 17:15:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96ea 18/04/17 17:15:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96ea closed 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Added broadcast_1125_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1102 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1100_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1100_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1101 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1099_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1119.0 (TID 1119) in 72 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:15:00 INFO scheduler.DAGScheduler: ResultStage 1119 (foreachPartition at PredictorEngineApp.java:153) finished in 0.073 s 18/04/17 17:15:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1119.0, whose tasks have all completed, from pool 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Job 1119 finished: foreachPartition at PredictorEngineApp.java:153, took 0.130774 s 18/04/17 17:15:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d08ea8b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d08ea8b0x0, 
quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1099_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34877, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1103_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1103_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1104 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1102_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1102_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1103 18/04/17 17:15:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.18 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1104_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1104_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96eb, negotiated timeout = 60000 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1105 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1079_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1079_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO spark.ContextCleaner: Cleaned accumulator 1081 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1080_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1080_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96eb 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1081_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:00 INFO storage.BlockManagerInfo: Removed broadcast_1081_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96eb closed 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.35 from job set of time 1523974500000 ms 18/04/17 17:15:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1107.0 (TID 1107) in 176 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:15:00 INFO 
cluster.YarnClusterScheduler: Removed TaskSet 1107.0, whose tasks have all completed, from pool 18/04/17 17:15:00 INFO scheduler.DAGScheduler: ResultStage 1107 (foreachPartition at PredictorEngineApp.java:153) finished in 0.176 s 18/04/17 17:15:00 INFO scheduler.DAGScheduler: Job 1107 finished: foreachPartition at PredictorEngineApp.java:153, took 0.190131 s 18/04/17 17:15:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x299ae4a9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x299ae4a90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45857, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29013, negotiated timeout = 60000 18/04/17 17:15:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29013 18/04/17 17:15:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29013 closed 18/04/17 17:15:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.7 from job set of time 1523974500000 ms 18/04/17 17:15:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1112.0 (TID 1112) in 1867 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:15:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1112.0, whose tasks have all completed, from pool 18/04/17 17:15:01 INFO scheduler.DAGScheduler: ResultStage 1112 (foreachPartition at PredictorEngineApp.java:153) finished in 1.867 s 18/04/17 17:15:01 INFO scheduler.DAGScheduler: Job 1112 finished: foreachPartition at PredictorEngineApp.java:153, took 1.900089 s 18/04/17 17:15:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x46308030 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x463080300x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34885, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96f2, negotiated timeout = 60000 18/04/17 17:15:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96f2 18/04/17 17:15:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96f2 closed 18/04/17 17:15:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.8 from job set of time 1523974500000 ms 18/04/17 17:15:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1113.0 (TID 1113) in 2412 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:15:02 INFO scheduler.DAGScheduler: ResultStage 1113 (foreachPartition at PredictorEngineApp.java:153) finished in 2.413 s 18/04/17 17:15:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1113.0, whose tasks have all completed, from pool 18/04/17 17:15:02 INFO scheduler.DAGScheduler: Job 1113 finished: foreachPartition at PredictorEngineApp.java:153, took 2.449196 s 18/04/17 17:15:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x23a44a3e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x23a44a3e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45866, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29016, negotiated timeout = 60000 18/04/17 17:15:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29016 18/04/17 17:15:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29016 closed 18/04/17 17:15:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.25 from job set of time 1523974500000 ms 18/04/17 17:15:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1120.0 (TID 1120) in 4014 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:15:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1120.0, whose tasks have all completed, from pool 18/04/17 17:15:04 INFO scheduler.DAGScheduler: ResultStage 1120 (foreachPartition at PredictorEngineApp.java:153) finished in 4.015 s 18/04/17 17:15:04 INFO scheduler.DAGScheduler: Job 1120 finished: foreachPartition at PredictorEngineApp.java:153, took 4.076806 s 18/04/17 17:15:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4e2d7b2e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4e2d7b2e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41276, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9739, negotiated timeout = 60000 18/04/17 17:15:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9739 18/04/17 17:15:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9739 closed 18/04/17 17:15:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.12 from job set of time 1523974500000 ms 18/04/17 17:15:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1110.0 (TID 1110) in 5108 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:15:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1110.0, whose tasks have all completed, from pool 18/04/17 17:15:05 INFO scheduler.DAGScheduler: ResultStage 1110 (foreachPartition at PredictorEngineApp.java:153) finished in 5.109 s 18/04/17 17:15:05 INFO scheduler.DAGScheduler: Job 1110 finished: foreachPartition at PredictorEngineApp.java:153, took 5.134155 s 18/04/17 17:15:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x786a4682 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x786a46820x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34899, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96f5, negotiated timeout = 60000 18/04/17 17:15:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96f5 18/04/17 17:15:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96f5 closed 18/04/17 17:15:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.23 from job set of time 1523974500000 ms 18/04/17 17:15:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1129.0 (TID 1129) in 5778 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:15:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1129.0, whose tasks have all completed, from pool 18/04/17 17:15:05 INFO scheduler.DAGScheduler: ResultStage 1129 (foreachPartition at PredictorEngineApp.java:153) finished in 5.779 s 18/04/17 17:15:05 INFO scheduler.DAGScheduler: Job 1129 finished: foreachPartition at PredictorEngineApp.java:153, took 5.880611 s 18/04/17 17:15:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2c4a253 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2c4a2530x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41291, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c973c, negotiated timeout = 60000 18/04/17 17:15:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c973c 18/04/17 17:15:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c973c closed 18/04/17 17:15:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.31 from job set of time 1523974500000 ms 18/04/17 17:15:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1116.0 (TID 1116) in 7316 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:15:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1116.0, whose tasks have all completed, from pool 18/04/17 17:15:07 INFO scheduler.DAGScheduler: ResultStage 1116 (foreachPartition at PredictorEngineApp.java:153) finished in 7.316 s 18/04/17 17:15:07 INFO scheduler.DAGScheduler: Job 1116 finished: foreachPartition at PredictorEngineApp.java:153, took 7.362283 s 18/04/17 17:15:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4615fce connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4615fce0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34915, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96f7, negotiated timeout = 60000 18/04/17 17:15:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96f7 18/04/17 17:15:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96f7 closed 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.6 from job set of time 1523974500000 ms 18/04/17 17:15:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1121.0 (TID 1121) in 7696 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:15:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1121.0, whose tasks have all completed, from pool 18/04/17 17:15:07 INFO scheduler.DAGScheduler: ResultStage 1121 (foreachPartition at PredictorEngineApp.java:153) finished in 7.697 s 18/04/17 17:15:07 INFO scheduler.DAGScheduler: Job 1121 finished: foreachPartition at PredictorEngineApp.java:153, took 7.763492 s 18/04/17 17:15:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5d611e60 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5d611e600x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45896, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29017, negotiated timeout = 60000 18/04/17 17:15:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29017 18/04/17 17:15:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29017 closed 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.15 from job set of time 1523974500000 ms 18/04/17 17:15:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1131.0 (TID 1131) in 7733 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:15:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1131.0, whose tasks have all completed, from pool 18/04/17 17:15:07 INFO scheduler.DAGScheduler: ResultStage 1131 (foreachPartition at PredictorEngineApp.java:153) finished in 7.733 s 18/04/17 17:15:07 INFO scheduler.DAGScheduler: Job 1131 finished: foreachPartition at PredictorEngineApp.java:153, took 7.839727 s 18/04/17 17:15:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3baef419 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3baef4190x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41304, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9740, negotiated timeout = 60000 18/04/17 17:15:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9740 18/04/17 17:15:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9740 closed 18/04/17 17:15:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.32 from job set of time 1523974500000 ms 18/04/17 17:15:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1117.0 (TID 1117) in 8192 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:15:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1117.0, whose tasks have all completed, from pool 18/04/17 17:15:08 INFO scheduler.DAGScheduler: ResultStage 1117 (foreachPartition at PredictorEngineApp.java:153) finished in 8.193 s 18/04/17 17:15:08 INFO scheduler.DAGScheduler: Job 1117 finished: foreachPartition at PredictorEngineApp.java:153, took 8.243323 s 18/04/17 17:15:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5269077f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5269077f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34926, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96f9, negotiated timeout = 60000 18/04/17 17:15:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96f9 18/04/17 17:15:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96f9 closed 18/04/17 17:15:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.9 from job set of time 1523974500000 ms 18/04/17 17:15:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1115.0 (TID 1115) in 10291 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:15:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1115.0, whose tasks have all completed, from pool 18/04/17 17:15:10 INFO scheduler.DAGScheduler: ResultStage 1115 (foreachPartition at PredictorEngineApp.java:153) finished in 10.292 s 18/04/17 17:15:10 INFO scheduler.DAGScheduler: Job 1115 finished: foreachPartition at PredictorEngineApp.java:153, took 10.334888 s 18/04/17 17:15:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2d284f26 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2d284f260x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45908, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29018, negotiated timeout = 60000 18/04/17 17:15:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29018 18/04/17 17:15:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29018 closed 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1109.0 (TID 1109) in 10337 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:15:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1109.0, whose tasks have all completed, from pool 18/04/17 17:15:10 INFO scheduler.DAGScheduler: ResultStage 1109 (foreachPartition at PredictorEngineApp.java:153) finished in 10.338 s 18/04/17 17:15:10 INFO scheduler.DAGScheduler: Job 1109 finished: foreachPartition at PredictorEngineApp.java:153, took 10.359910 s 18/04/17 17:15:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xdd47498 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xdd474980x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34934, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.27 from job set of time 1523974500000 ms 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96fb, negotiated timeout = 60000 18/04/17 17:15:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96fb 18/04/17 17:15:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96fb closed 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.33 from job set of time 1523974500000 ms 18/04/17 17:15:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1128.0 (TID 1128) in 10320 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:15:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1128.0, whose tasks have all completed, from pool 18/04/17 17:15:10 INFO scheduler.DAGScheduler: ResultStage 1128 (foreachPartition at PredictorEngineApp.java:153) finished in 10.320 s 18/04/17 17:15:10 INFO scheduler.DAGScheduler: Job 1128 finished: foreachPartition at PredictorEngineApp.java:153, took 10.419896 s 18/04/17 17:15:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c0617db connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c0617db0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34937, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96fd, negotiated timeout = 60000 18/04/17 17:15:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96fd 18/04/17 17:15:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96fd closed 18/04/17 17:15:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.34 from job set of time 1523974500000 ms 18/04/17 17:15:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1127.0 (TID 1127) in 11821 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:15:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1127.0, whose tasks have all completed, from pool 18/04/17 17:15:11 INFO scheduler.DAGScheduler: ResultStage 1127 (foreachPartition at PredictorEngineApp.java:153) finished in 11.822 s 18/04/17 17:15:11 INFO scheduler.DAGScheduler: Job 1127 finished: foreachPartition at PredictorEngineApp.java:153, took 11.919092 s 18/04/17 17:15:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d987e63 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d987e630x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34943, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a96fe, negotiated timeout = 60000 18/04/17 17:15:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a96fe 18/04/17 17:15:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a96fe closed 18/04/17 17:15:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:12 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.26 from job set of time 1523974500000 ms 18/04/17 17:15:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1105.0 (TID 1105) in 12892 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:15:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1105.0, whose tasks have all completed, from pool 18/04/17 17:15:12 INFO scheduler.DAGScheduler: ResultStage 1105 (foreachPartition at PredictorEngineApp.java:153) finished in 12.892 s 18/04/17 17:15:12 INFO scheduler.DAGScheduler: Job 1105 finished: foreachPartition at PredictorEngineApp.java:153, took 12.899815 s 18/04/17 17:15:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3e7b1fed connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3e7b1fed0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45924, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29019, negotiated timeout = 60000 18/04/17 17:15:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29019 18/04/17 17:15:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29019 closed 18/04/17 17:15:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:12 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.2 from job set of time 1523974500000 ms 18/04/17 17:15:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1108.0 (TID 1108) in 13089 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:15:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1108.0, whose tasks have all completed, from pool 18/04/17 17:15:13 INFO scheduler.DAGScheduler: ResultStage 1108 (foreachPartition at PredictorEngineApp.java:153) finished in 13.089 s 18/04/17 17:15:13 INFO scheduler.DAGScheduler: Job 1108 finished: foreachPartition at PredictorEngineApp.java:153, took 13.107278 s 18/04/17 17:15:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x406934b1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x406934b10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45927, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2901a, negotiated timeout = 60000 18/04/17 17:15:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2901a 18/04/17 17:15:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2901a closed 18/04/17 17:15:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.29 from job set of time 1523974500000 ms 18/04/17 17:15:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1125.0 (TID 1125) in 14158 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:15:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1125.0, whose tasks have all completed, from pool 18/04/17 17:15:14 INFO scheduler.DAGScheduler: ResultStage 1125 (foreachPartition at PredictorEngineApp.java:153) finished in 14.158 s 18/04/17 17:15:14 INFO scheduler.DAGScheduler: Job 1124 finished: foreachPartition at PredictorEngineApp.java:153, took 14.249001 s 18/04/17 17:15:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x45a6cf4d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x45a6cf4d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45932, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2901c, negotiated timeout = 60000 18/04/17 17:15:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2901c 18/04/17 17:15:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2901c closed 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1123.0 (TID 1123) in 14197 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:15:14 INFO scheduler.DAGScheduler: ResultStage 1123 (foreachPartition at PredictorEngineApp.java:153) finished in 14.197 s 18/04/17 17:15:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1123.0, whose tasks have all completed, from pool 18/04/17 17:15:14 INFO scheduler.DAGScheduler: Job 1122 finished: foreachPartition at PredictorEngineApp.java:153, took 14.280515 s 18/04/17 17:15:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2eeb9c07 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2eeb9c070x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.19 from job set of time 1523974500000 ms 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41340, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9743, negotiated timeout = 60000 18/04/17 17:15:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9743 18/04/17 17:15:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9743 closed 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1106.0 (TID 1106) in 14305 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:15:14 INFO scheduler.DAGScheduler: ResultStage 1106 (foreachPartition at PredictorEngineApp.java:153) finished in 14.305 s 18/04/17 17:15:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1106.0, whose tasks have all completed, from pool 18/04/17 17:15:14 INFO scheduler.DAGScheduler: Job 1106 finished: foreachPartition at PredictorEngineApp.java:153, took 14.315838 s 18/04/17 17:15:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x201f713 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x201f7130x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.28 from job set of time 1523974500000 ms 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45938, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2901d, negotiated timeout = 60000 18/04/17 17:15:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2901d 18/04/17 17:15:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2901d closed 18/04/17 17:15:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.22 from job set of time 1523974500000 ms 18/04/17 17:15:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1130.0 (TID 1130) in 14999 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:15:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1130.0, whose tasks have all completed, from pool 18/04/17 17:15:15 INFO scheduler.DAGScheduler: ResultStage 1130 (foreachPartition at PredictorEngineApp.java:153) finished in 15.000 s 18/04/17 17:15:15 INFO scheduler.DAGScheduler: Job 1130 finished: foreachPartition at PredictorEngineApp.java:153, took 15.104237 s 18/04/17 17:15:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x41396691 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x413966910x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34964, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9700, negotiated timeout = 60000 18/04/17 17:15:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9700 18/04/17 17:15:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9700 closed 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.24 from job set of time 1523974500000 ms 18/04/17 17:15:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1114.0 (TID 1114) in 15337 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:15:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1114.0, whose tasks have all completed, from pool 18/04/17 17:15:15 INFO scheduler.DAGScheduler: ResultStage 1114 (foreachPartition at PredictorEngineApp.java:153) finished in 15.337 s 18/04/17 17:15:15 INFO scheduler.DAGScheduler: Job 1114 finished: foreachPartition at PredictorEngineApp.java:153, took 15.377123 s 18/04/17 17:15:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3f0cd627 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3f0cd6270x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41350, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9744, negotiated timeout = 60000 18/04/17 17:15:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9744 18/04/17 17:15:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9744 closed 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.20 from job set of time 1523974500000 ms 18/04/17 17:15:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1118.0 (TID 1118) in 15579 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:15:15 INFO scheduler.DAGScheduler: ResultStage 1118 (foreachPartition at PredictorEngineApp.java:153) finished in 15.580 s 18/04/17 17:15:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1118.0, whose tasks have all completed, from pool 18/04/17 17:15:15 INFO scheduler.DAGScheduler: Job 1118 finished: foreachPartition at PredictorEngineApp.java:153, took 15.634664 s 18/04/17 17:15:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7c398362 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7c3983620x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34971, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9702, negotiated timeout = 60000 18/04/17 17:15:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9702 18/04/17 17:15:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9702 closed 18/04/17 17:15:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.1 from job set of time 1523974500000 ms 18/04/17 17:15:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1122.0 (TID 1122) in 18737 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:15:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1122.0, whose tasks have all completed, from pool 18/04/17 17:15:18 INFO scheduler.DAGScheduler: ResultStage 1122 (foreachPartition at PredictorEngineApp.java:153) finished in 18.738 s 18/04/17 17:15:18 INFO scheduler.DAGScheduler: Job 1123 finished: foreachPartition at PredictorEngineApp.java:153, took 18.817867 s 18/04/17 17:15:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7e312a9a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7e312a9a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45956, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29021, negotiated timeout = 60000 18/04/17 17:15:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29021 18/04/17 17:15:18 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29021 closed 18/04/17 17:15:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:18 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.11 from job set of time 1523974500000 ms 18/04/17 17:15:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1124.0 (TID 1124) in 20793 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:15:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 1124.0, whose tasks have all completed, from pool 18/04/17 17:15:20 INFO scheduler.DAGScheduler: ResultStage 1124 (foreachPartition at PredictorEngineApp.java:153) finished in 20.794 s 18/04/17 17:15:20 INFO scheduler.DAGScheduler: Job 1125 finished: foreachPartition at PredictorEngineApp.java:153, took 20.881098 s 18/04/17 17:15:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7391c7f7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7391c7f70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41367, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9747, negotiated timeout = 60000 18/04/17 17:15:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9747 18/04/17 17:15:20 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9747 closed 18/04/17 17:15:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:20 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.5 from job set of time 1523974500000 ms 18/04/17 17:15:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1126.0 (TID 1126) in 21815 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:15:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 1126.0, whose tasks have all completed, from pool 18/04/17 17:15:21 INFO scheduler.DAGScheduler: ResultStage 1126 (foreachPartition at PredictorEngineApp.java:153) finished in 21.816 s 18/04/17 17:15:21 INFO scheduler.DAGScheduler: Job 1126 finished: foreachPartition at PredictorEngineApp.java:153, took 21.910261 s 18/04/17 17:15:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6dd739cc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:15:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6dd739cc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:15:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:15:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:34990, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:15:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9704, negotiated timeout = 60000 18/04/17 17:15:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9704 18/04/17 17:15:21 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9704 closed 18/04/17 17:15:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:15:21 INFO scheduler.JobScheduler: Finished job streaming job 1523974500000 ms.10 from job set of time 1523974500000 ms 18/04/17 17:15:21 INFO scheduler.JobScheduler: Total delay: 21.995 s for time 1523974500000 ms (execution: 21.948 s) 18/04/17 17:15:21 INFO kafka.KafkaRDD: Removing RDD 1476 from persistence list 18/04/17 17:15:21 INFO storage.BlockManager: Removing RDD 1476 18/04/17 17:15:21 INFO kafka.KafkaRDD: Removing RDD 1440 from persistence list 18/04/17 17:15:21 INFO storage.BlockManager: Removing RDD 1440 18/04/17 17:15:21 INFO kafka.KafkaRDD: Removing RDD 1476 from persistence list 18/04/17 17:15:21 INFO storage.BlockManager: Removing RDD 1476 18/04/17 17:15:21 INFO kafka.KafkaRDD: Removing RDD 1440 from persistence list 18/04/17 17:15:21 INFO storage.BlockManager: Removing RDD 1440 18/04/17 17:15:21 INFO kafka.KafkaRDD: Removing RDD 1477 from persistence list 18/04/17 17:15:21 INFO storage.BlockManager: Removing RDD 1477 18/04/17 17:15:21 INFO kafka.KafkaRDD: Removing RDD 1441 from persistence list 18/04/17 17:15:21 INFO storage.BlockManager: Removing RDD 1441 18/04/17 17:15:21 INFO kafka.KafkaRDD: Removing RDD 1477 from persistence list 18/04/17 17:15:21 INFO storage.BlockManager: Removing RDD 1477 18/04/17 17:15:21 INFO kafka.KafkaRDD: Removing RDD 1441 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1441 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1478 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1478 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1442 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1442 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1478 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1478 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1442 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1442 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1479 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1479 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1443 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1443 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1479 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1479 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1443 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1443 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1480 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1480 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1444 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1444 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1480 
from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1480 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1444 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1444 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1481 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1481 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1445 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1445 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1481 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1481 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1445 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1445 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1482 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1482 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1446 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1446 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1482 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1482 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1446 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1446 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1483 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1483 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1447 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1447 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1483 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1483 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1447 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1447 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1484 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1484 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1448 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1448 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1484 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1484 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1448 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1448 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1485 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1485 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1449 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1449 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1485 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1485 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1449 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1449 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1486 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1486 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1450 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1450 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1486 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1486 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1450 from 
persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1450 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1487 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1487 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1451 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1451 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1487 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1487 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1451 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1451 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1488 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1488 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1452 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1452 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1488 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1488 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1452 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1452 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1489 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1489 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1453 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1453 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1489 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1489 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1453 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1453 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1490 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1490 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1454 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1454 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1490 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1490 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1454 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1454 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1491 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1491 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1455 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1455 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1491 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1491 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1455 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1455 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1492 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1492 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1456 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1456 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1492 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1492 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1456 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1456 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1493 from 
persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1493 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1457 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1457 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1493 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1493 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1457 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1457 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1494 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1494 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1458 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1458 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1494 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1494 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1458 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1458 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1495 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1495 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1459 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1459 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1495 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1495 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1459 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1459 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1496 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1496 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1460 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1460 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1496 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1496 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1460 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1460 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1497 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1497 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1461 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1461 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1497 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1497 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1461 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1461 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1498 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1498 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1462 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1462 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1498 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1498 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1462 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1462 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1499 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1499 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1463 from 
persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1463 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1499 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1499 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1463 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1463 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1500 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1500 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1464 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1464 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1500 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1500 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1464 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1464 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1501 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1501 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1465 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1465 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1501 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1501 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1465 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1465 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1502 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1502 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1466 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1466 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1502 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1502 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1466 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1466 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1503 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1503 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1467 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1467 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1503 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1503 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1467 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1467 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1504 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1504 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1468 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1468 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1504 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1504 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1468 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1468 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1505 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1505 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1469 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1469 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1505 from 
persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1505 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1469 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1469 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1506 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1506 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1470 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1470 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1506 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1506 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1470 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1470 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1507 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1507 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1471 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1471 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1507 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1507 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1471 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1471 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1508 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1508 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1472 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1472 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1508 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1508 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1472 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1472 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1509 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1509 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1473 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1473 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1509 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1509 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1473 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1473 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1510 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1510 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1474 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1474 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1510 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1510 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1474 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1474 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1511 from persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1511 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1475 from persistence list 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1106_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1475 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1511 from 
persistence list 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1511 18/04/17 17:15:22 INFO kafka.KafkaRDD: Removing RDD 1475 from persistence list 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1106_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO storage.BlockManager: Removing RDD 1475 18/04/17 17:15:22 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:15:22 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974380000 ms 1523974320000 ms 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1109_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1109_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1108 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1105_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1105_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1108_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1108_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1114 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1112_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1112_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1113 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1107 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1109 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1106 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1115 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1113_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1113_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1115_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1115_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1116 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1114_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1114_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1116_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1116_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1117 
18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1110_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1110_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1117_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1117_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1118 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1120 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1118_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1118_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1119 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1110 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1111_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1111_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1119_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1119_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1107_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1107_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1121 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1121_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1121_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1122 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1120_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1120_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1131_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1131_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1132 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1130_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1130_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1124 18/04/17 17:15:22 INFO storage.BlockManagerInfo: 
Removed broadcast_1122_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1122_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1123 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1124_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1124_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1125 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1123_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1123_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1111 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1125_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1125_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1126 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1128 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1126_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1126_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1127 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1112 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1129 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1127_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1127_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1129_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1129_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1130 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1128_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:15:22 INFO storage.BlockManagerInfo: Removed broadcast_1128_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:15:22 INFO spark.ContextCleaner: Cleaned accumulator 1131 18/04/17 17:16:00 INFO scheduler.JobScheduler: Added jobs for time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.0 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.1 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.2 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 
1523974560000 ms.3 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.0 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.4 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.3 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.6 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.5 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.7 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.4 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.8 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.9 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.10 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.11 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.12 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.13 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.13 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.14 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.14 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.15 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.16 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.16 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.17 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.19 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.18 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.17 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.20 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.22 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.21 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 
1523974560000 ms.23 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.21 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.24 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.25 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.26 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.27 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.28 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.29 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.30 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.30 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.31 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.32 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.34 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.35 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974560000 ms.33 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1132 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1132 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1132 (KafkaRDD[1568] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 
17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1132 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1132_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1132_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1132 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1132 (KafkaRDD[1568] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1132.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1133 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1133 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1133 (KafkaRDD[1549] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1132.0 (TID 1132, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1133 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1133_piece0 stored as bytes in memory (estimated size 3.1 KB, 
free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1133_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1133 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1133 (KafkaRDD[1549] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1133.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1134 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1134 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1134 (KafkaRDD[1566] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1133.0 (TID 1133, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1134 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1134_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1134_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1134 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1134 (KafkaRDD[1566] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1134.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1136 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1135 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1135 (KafkaRDD[1583] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1134.0 (TID 1134, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1135 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1135_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1135_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1135 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1135 (KafkaRDD[1583] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 
1135.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1135 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1136 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1136 (KafkaRDD[1567] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1136 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1135.0 (TID 1135, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1136_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1132_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1136_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1136 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1136 (KafkaRDD[1567] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1136.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1138 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1137 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1137 (KafkaRDD[1553] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1136.0 (TID 1136, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1137 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1137_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1137_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1137 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1137 (KafkaRDD[1553] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1137.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1137 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1138 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: 
List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1138 (KafkaRDD[1555] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1138 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1133_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1137.0 (TID 1137, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1134_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1138_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1138_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1138 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1138 (KafkaRDD[1555] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1138.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1139 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1139 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1139 (KafkaRDD[1556] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1138.0 (TID 1138, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1139 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1139_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1139_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1139 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1139 (KafkaRDD[1556] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1139.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1140 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1140 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1140 (KafkaRDD[1574] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1135_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1140 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1139.0 (TID 1139, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1137_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1140_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1140_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1140 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1140 (KafkaRDD[1574] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1140.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1141 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1141 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1141 (KafkaRDD[1571] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1141 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1140.0 (TID 1140, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1141_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1141_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1141 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1141 (KafkaRDD[1571] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1141.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1142 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1142 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1142 (KafkaRDD[1560] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1142 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 
INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1141.0 (TID 1141, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1142_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1142_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1142 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1142 (KafkaRDD[1560] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1142.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1143 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1143 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1143 (KafkaRDD[1582] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1143 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1136_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1142.0 (TID 1142, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1143_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1143_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1143 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1143 (KafkaRDD[1582] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1143.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1144 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1144 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1144 (KafkaRDD[1563] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1144 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1143.0 (TID 1143, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1140_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block 
broadcast_1144_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1144_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1144 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1141_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1144 (KafkaRDD[1563] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1144.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1145 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1145 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1145 (KafkaRDD[1558] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1145 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1144.0 (TID 1144, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1139_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1145_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1145_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1145 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1145 (KafkaRDD[1558] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1145.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1146 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1146 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1146 (KafkaRDD[1550] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1146 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1145.0 (TID 1145, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1146_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1146_piece0 in memory on ***IP masked***:45737 (size: 
3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1146 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1146 (KafkaRDD[1550] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1146.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1147 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1147 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1147 (KafkaRDD[1580] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1147 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1146.0 (TID 1146, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1144_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1138_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1147_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1147_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1147 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1147 (KafkaRDD[1580] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1147.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1148 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1148 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1148 (KafkaRDD[1573] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1142_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1148 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1147.0 (TID 1147, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1148_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1148_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO 
spark.SparkContext: Created broadcast 1148 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1148 (KafkaRDD[1573] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1148.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1149 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1149 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1149 (KafkaRDD[1570] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1149 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1148.0 (TID 1148, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1143_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1149_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1149_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1147_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1149 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1149 (KafkaRDD[1570] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1149.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1150 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1150 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1150 (KafkaRDD[1572] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1150 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1149.0 (TID 1149, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1145_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1150_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1150_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1150 from broadcast at 
DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1150 (KafkaRDD[1572] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1150.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1151 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1151 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1151 (KafkaRDD[1579] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1151 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1148_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1150.0 (TID 1150, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1151_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1151_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1151 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1151 (KafkaRDD[1579] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1151.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1152 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1152 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1152 (KafkaRDD[1557] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1152 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1151.0 (TID 1151, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1152_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1152_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1152 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1152 (KafkaRDD[1557] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1152.0 with 1 tasks 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added 
broadcast_1146_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1153 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1153 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1153 (KafkaRDD[1554] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1153 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1152.0 (TID 1152, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1153_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1151_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1153_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1153 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1153 (KafkaRDD[1554] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1153.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1154 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1154 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1154 (KafkaRDD[1577] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1154 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1153.0 (TID 1153, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1154_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1154_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1154 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1154 (KafkaRDD[1577] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1154.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1155 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1155 (foreachPartition at PredictorEngineApp.java:153) 
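Every scheduler entry above traces back to two call sites in the application: createDirectStream at PredictorEngineApp.java:125 (the KafkaRDD behind each ResultStage) and foreachPartition at PredictorEngineApp.java:153 (the action that triggers one single-task job per stream and per batch). The application source is not part of this log, so the following is a minimal, hypothetical Spark 1.6 Java driver sketch that would produce this pattern; the class name comes from the log, while the topics, broker list, and 60-second batch interval are assumptions.

    import java.util.*;

    import kafka.serializer.StringDecoder;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.function.VoidFunction;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    import scala.Tuple2;

    public class PredictorEngineApp {                            // class name taken from the log
        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf().setAppName("predictor-engine");
            JavaStreamingContext jssc =
                    new JavaStreamingContext(conf, Durations.seconds(60));   // batch interval assumed

            Map<String, String> kafkaParams = new HashMap<>();
            kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092");  // placeholder brokers

            // One direct stream per topic: each micro-batch then contains one
            // "KafkaRDD[...] at createDirectStream at PredictorEngineApp.java:125" per topic.
            for (String topic : Arrays.asList("topic-1", "topic-2" /* , ... */)) {
                JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                        jssc, String.class, String.class,
                        StringDecoder.class, StringDecoder.class,
                        kafkaParams, Collections.singleton(topic));

                // Each foreachRDD registers one output operation, i.e. one
                // "streaming job <batch time> ms.N" entry per topic and per batch.
                stream.foreachRDD(new VoidFunction<JavaPairRDD<String, String>>() {
                    @Override
                    public void call(JavaPairRDD<String, String> rdd) {
                        // The action behind "foreachPartition at PredictorEngineApp.java:153":
                        // one ResultStage with one task per Kafka partition.
                        rdd.foreachPartition(new VoidFunction<Iterator<Tuple2<String, String>>>() {
                            @Override
                            public void call(Iterator<Tuple2<String, String>> records) {
                                while (records.hasNext()) {
                                    Tuple2<String, String> record = records.next();
                                    // score the record and write the prediction out (e.g. to HBase)
                                }
                            }
                        });
                    }
                });
            }

            jssc.start();
            jssc.awaitTermination();
        }
    }

With a few dozen such output operations registered, each batch produces the numbered "streaming job 1523974560000 ms.N" entries (ms.1 through ms.35 in this batch) that appear further down in the log.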
18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1155 (KafkaRDD[1575] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1155 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1154.0 (TID 1154, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1155_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1155_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1155 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1155 (KafkaRDD[1575] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1155.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1157 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1156 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1156 (KafkaRDD[1559] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1156 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1155.0 (TID 1155, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1153_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1156_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1156_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1156 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1156 (KafkaRDD[1559] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1156.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1156 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1157 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1157 (KafkaRDD[1581] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO 
storage.MemoryStore: Block broadcast_1157 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1156.0 (TID 1156, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1157_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1157_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1157 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1157 (KafkaRDD[1581] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1157.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Got job 1158 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1158 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1158 (KafkaRDD[1576] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1158 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1157.0 (TID 1157, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1149_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.MemoryStore: Block broadcast_1158_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1158_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:16:00 INFO spark.SparkContext: Created broadcast 1158 from broadcast at DAGScheduler.scala:1006 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1158 (KafkaRDD[1576] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Adding task set 1158.0 with 1 tasks 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1158.0 (TID 1158, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1138.0 (TID 1138) in 62 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1138.0, whose tasks have all completed, from pool 18/04/17 17:16:00 INFO scheduler.DAGScheduler: ResultStage 1138 (foreachPartition at PredictorEngineApp.java:153) finished in 0.062 s 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Job 1137 finished: foreachPartition at PredictorEngineApp.java:153, took 0.084789 s 18/04/17 17:16:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2ff9c258 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:00 INFO zookeeper.ZooKeeper: Initiating 
client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2ff9c2580x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1155_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1152_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41520, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9750, negotiated timeout = 60000 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1156_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1157_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1158_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9750 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1154_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9750 closed 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:00 INFO storage.BlockManagerInfo: Added broadcast_1150_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.7 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1137.0 (TID 1137) in 154 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1137.0, whose tasks have all completed, from pool 18/04/17 17:16:00 INFO scheduler.DAGScheduler: ResultStage 1137 (foreachPartition at PredictorEngineApp.java:153) finished in 0.154 s 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Job 1138 finished: foreachPartition at PredictorEngineApp.java:153, took 0.173983 s 18/04/17 17:16:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66de307d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x66de307d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41523, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9751, negotiated timeout = 60000 18/04/17 17:16:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9751 18/04/17 17:16:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9751 closed 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.5 from job set of time 1523974560000 ms 18/04/17 17:16:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1135.0 (TID 1135) in 425 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:16:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1135.0, whose tasks have all completed, from pool 18/04/17 17:16:00 INFO scheduler.DAGScheduler: ResultStage 1135 (foreachPartition at PredictorEngineApp.java:153) finished in 0.425 s 18/04/17 17:16:00 INFO scheduler.DAGScheduler: Job 1136 finished: foreachPartition at PredictorEngineApp.java:153, took 0.439444 s 18/04/17 17:16:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5f08af62 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5f08af620x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
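Each of the ZooKeeper bursts above (RecoverableZooKeeper opening a session to the ensemble, followed within the same second by "Closing zookeeper sessionid=…") is the signature of an HBase client Connection being created on demand and closed as soon as the work is done; here they appear on the driver right after each job of the batch finishes. What these driver-side connections are used for is not visible in the log, so the following is only a hedged sketch of the underlying open-use-close pattern, using the HBase 1.x client API with placeholder class, table, and column names.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class PredictionWriter {                        // hypothetical helper, not from the log
        // Creating and closing a Connection per use is what produces the repeated
        // "RecoverableZooKeeper ... connecting to ZooKeeper ensemble" /
        // "Closing zookeeper sessionid=..." pairs seen above.
        static void writePrediction(byte[] rowKey, byte[] value) throws Exception {
            Configuration conf = HBaseConfiguration.create();    // picks up hbase-site.xml from the classpath
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("predictions"))) {  // placeholder table
                Put put = new Put(rowKey);
                put.addColumn(Bytes.toBytes("p"), Bytes.toBytes("score"), value);        // placeholder column
                table.put(put);
            }
        }
    }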
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41526, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9756, negotiated timeout = 60000 18/04/17 17:16:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9756 18/04/17 17:16:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9756 closed 18/04/17 17:16:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.35 from job set of time 1523974560000 ms 18/04/17 17:16:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1139.0 (TID 1139) in 1799 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:16:01 INFO scheduler.DAGScheduler: ResultStage 1139 (foreachPartition at PredictorEngineApp.java:153) finished in 1.800 s 18/04/17 17:16:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1139.0, whose tasks have all completed, from pool 18/04/17 17:16:01 INFO scheduler.DAGScheduler: Job 1139 finished: foreachPartition at PredictorEngineApp.java:153, took 1.825271 s 18/04/17 17:16:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6737834f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6737834f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35149, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a970c, negotiated timeout = 60000 18/04/17 17:16:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a970c 18/04/17 17:16:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a970c closed 18/04/17 17:16:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.8 from job set of time 1523974560000 ms 18/04/17 17:16:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1148.0 (TID 1148) in 2197 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:16:02 INFO scheduler.DAGScheduler: ResultStage 1148 (foreachPartition at PredictorEngineApp.java:153) finished in 2.197 s 18/04/17 17:16:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1148.0, whose tasks have all completed, from pool 18/04/17 17:16:02 INFO scheduler.DAGScheduler: Job 1148 finished: foreachPartition at PredictorEngineApp.java:153, took 2.246246 s 18/04/17 17:16:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xbdf3e95 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xbdf3e950x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46132, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29034, negotiated timeout = 60000 18/04/17 17:16:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29034 18/04/17 17:16:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29034 closed 18/04/17 17:16:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.25 from job set of time 1523974560000 ms 18/04/17 17:16:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1134.0 (TID 1134) in 4155 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:16:04 INFO scheduler.DAGScheduler: ResultStage 1134 (foreachPartition at PredictorEngineApp.java:153) finished in 4.155 s 18/04/17 17:16:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1134.0, whose tasks have all completed, from pool 18/04/17 17:16:04 INFO scheduler.DAGScheduler: Job 1134 finished: foreachPartition at PredictorEngineApp.java:153, took 4.167032 s 18/04/17 17:16:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x31517ec1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x31517ec10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35161, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a970f, negotiated timeout = 60000 18/04/17 17:16:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1132.0 (TID 1132) in 4169 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:16:04 INFO scheduler.DAGScheduler: ResultStage 1132 (foreachPartition at PredictorEngineApp.java:153) finished in 4.169 s 18/04/17 17:16:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1132.0, whose tasks have all completed, from pool 18/04/17 17:16:04 INFO scheduler.DAGScheduler: Job 1132 finished: foreachPartition at PredictorEngineApp.java:153, took 4.175224 s 18/04/17 17:16:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a970f 18/04/17 17:16:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a970f closed 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.18 from job set of time 1523974560000 ms 18/04/17 17:16:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.20 from job set of time 1523974560000 ms 18/04/17 17:16:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1151.0 (TID 1151) in 4199 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:16:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1151.0, whose tasks have all completed, from pool 18/04/17 17:16:04 INFO scheduler.DAGScheduler: ResultStage 1151 (foreachPartition at PredictorEngineApp.java:153) finished in 4.200 s 18/04/17 17:16:04 INFO scheduler.DAGScheduler: Job 1151 finished: foreachPartition at PredictorEngineApp.java:153, took 4.256672 s 18/04/17 17:16:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5b3cf467 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5b3cf4670x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41546, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c975a, negotiated timeout = 60000 18/04/17 17:16:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c975a 18/04/17 17:16:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c975a closed 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.31 from job set of time 1523974560000 ms 18/04/17 17:16:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1153.0 (TID 1153) in 4633 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:16:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1153.0, whose tasks have all completed, from pool 18/04/17 17:16:04 INFO scheduler.DAGScheduler: ResultStage 1153 (foreachPartition at PredictorEngineApp.java:153) finished in 4.634 s 18/04/17 17:16:04 INFO scheduler.DAGScheduler: Job 1153 finished: foreachPartition at PredictorEngineApp.java:153, took 4.705208 s 18/04/17 17:16:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x19cb51ce connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x19cb51ce0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41549, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c975b, negotiated timeout = 60000 18/04/17 17:16:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c975b 18/04/17 17:16:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c975b closed 18/04/17 17:16:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.6 from job set of time 1523974560000 ms 18/04/17 17:16:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1142.0 (TID 1142) in 5955 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:16:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1142.0, whose tasks have all completed, from pool 18/04/17 17:16:06 INFO scheduler.DAGScheduler: ResultStage 1142 (foreachPartition at PredictorEngineApp.java:153) finished in 5.956 s 18/04/17 17:16:06 INFO scheduler.DAGScheduler: Job 1142 finished: foreachPartition at PredictorEngineApp.java:153, took 5.989132 s 18/04/17 17:16:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6ffae44c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6ffae44c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41554, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c975c, negotiated timeout = 60000 18/04/17 17:16:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c975c 18/04/17 17:16:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c975c closed 18/04/17 17:16:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.12 from job set of time 1523974560000 ms 18/04/17 17:16:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1157.0 (TID 1157) in 7873 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:16:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1157.0, whose tasks have all completed, from pool 18/04/17 17:16:08 INFO scheduler.DAGScheduler: ResultStage 1157 (foreachPartition at PredictorEngineApp.java:153) finished in 7.874 s 18/04/17 17:16:08 INFO scheduler.DAGScheduler: Job 1156 finished: foreachPartition at PredictorEngineApp.java:153, took 7.954744 s 18/04/17 17:16:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xc5880e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xc5880e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46156, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29038, negotiated timeout = 60000 18/04/17 17:16:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29038 18/04/17 17:16:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29038 closed 18/04/17 17:16:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1152.0 (TID 1152) in 7916 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:16:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1152.0, whose tasks have all completed, from pool 18/04/17 17:16:08 INFO scheduler.DAGScheduler: ResultStage 1152 (foreachPartition at PredictorEngineApp.java:153) finished in 7.917 s 18/04/17 17:16:08 INFO scheduler.DAGScheduler: Job 1152 finished: foreachPartition at PredictorEngineApp.java:153, took 7.984658 s 18/04/17 17:16:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x57186807 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x571868070x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.33 from job set of time 1523974560000 ms 18/04/17 17:16:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
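The parameters repeated in every connection attempt (a three-host quorum on port 2181, baseZNode=/hbase, sessionTimeout=60000) come from the HBase client configuration. A sketch of the corresponding keys, set programmatically with placeholder hostnames; in practice they are normally supplied by an hbase-site.xml on the client classpath rather than set in code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class HBaseClientConfig {                       // hypothetical helper, not from the log
        // Placeholder hostnames; the real quorum members are masked in the log.
        static Configuration create() {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");
            conf.set("hbase.zookeeper.property.clientPort", "2181"); // the ":2181" in the connectString
            conf.set("zookeeper.znode.parent", "/hbase");            // baseZNode=/hbase in the log
            conf.set("zookeeper.session.timeout", "60000");          // sessionTimeout=60000 in the log
            return conf;
        }
    }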
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41564, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c975d, negotiated timeout = 60000 18/04/17 17:16:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c975d 18/04/17 17:16:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c975d closed 18/04/17 17:16:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.9 from job set of time 1523974560000 ms 18/04/17 17:16:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1133.0 (TID 1133) in 9344 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:16:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1133.0, whose tasks have all completed, from pool 18/04/17 17:16:09 INFO scheduler.DAGScheduler: ResultStage 1133 (foreachPartition at PredictorEngineApp.java:153) finished in 9.344 s 18/04/17 17:16:09 INFO scheduler.DAGScheduler: Job 1133 finished: foreachPartition at PredictorEngineApp.java:153, took 9.352829 s 18/04/17 17:16:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5dce8609 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5dce86090x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46165, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29039, negotiated timeout = 60000 18/04/17 17:16:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29039 18/04/17 17:16:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29039 closed 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.1 from job set of time 1523974560000 ms 18/04/17 17:16:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1141.0 (TID 1141) in 9511 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:16:09 INFO scheduler.DAGScheduler: ResultStage 1141 (foreachPartition at PredictorEngineApp.java:153) finished in 9.511 s 18/04/17 17:16:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1141.0, whose tasks have all completed, from pool 18/04/17 17:16:09 INFO scheduler.DAGScheduler: Job 1141 finished: foreachPartition at PredictorEngineApp.java:153, took 9.541548 s 18/04/17 17:16:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1136.0 (TID 1136) in 9525 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:16:09 INFO scheduler.DAGScheduler: ResultStage 1136 (foreachPartition at PredictorEngineApp.java:153) finished in 9.525 s 18/04/17 17:16:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1136.0, whose tasks have all completed, from pool 18/04/17 17:16:09 INFO scheduler.DAGScheduler: Job 1135 finished: foreachPartition at PredictorEngineApp.java:153, took 9.542529 s 18/04/17 17:16:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x35f58de7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x752ee47d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x35f58de70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x752ee47d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46168, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46169, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2903b, negotiated timeout = 60000 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2903c, negotiated timeout = 60000 18/04/17 17:16:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2903b 18/04/17 17:16:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2903c 18/04/17 17:16:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2903b closed 18/04/17 17:16:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2903c closed 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.19 from job set of time 1523974560000 ms 18/04/17 17:16:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.23 from job set of time 1523974560000 ms 18/04/17 17:16:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1147.0 (TID 1147) in 11747 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:16:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1147.0, whose tasks have all completed, from pool 18/04/17 17:16:11 INFO scheduler.DAGScheduler: ResultStage 1147 (foreachPartition at PredictorEngineApp.java:153) finished in 11.748 s 18/04/17 17:16:11 INFO scheduler.DAGScheduler: Job 1147 finished: foreachPartition at PredictorEngineApp.java:153, took 11.794272 s 18/04/17 17:16:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xf7aee3c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xf7aee3c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
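Jobs from the same 1523974560000 ms job set finish out of order, and their tasks (stages 1132 through 1158) are in flight simultaneously, so several output operations of one batch are clearly running in parallel. Spark Streaming executes one job at a time by default; a hedged sketch of the undocumented setting commonly used to allow this, with an assumed value that is not taken from the log.

    import org.apache.spark.SparkConf;

    public final class StreamingConcurrency {                    // hypothetical helper, not from the log
        // "spark.streaming.concurrentJobs" > 1 lets several output operations of the same
        // batch run in parallel (the default is 1). The value 8 is an assumption.
        static SparkConf build() {
            return new SparkConf()
                    .setAppName("predictor-engine")
                    .set("spark.streaming.concurrentJobs", "8");
        }
    }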
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41582, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9760, negotiated timeout = 60000 18/04/17 17:16:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9760 18/04/17 17:16:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9760 closed 18/04/17 17:16:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.32 from job set of time 1523974560000 ms 18/04/17 17:16:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1144.0 (TID 1144) in 11869 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:16:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1144.0, whose tasks have all completed, from pool 18/04/17 17:16:11 INFO scheduler.DAGScheduler: ResultStage 1144 (foreachPartition at PredictorEngineApp.java:153) finished in 11.870 s 18/04/17 17:16:11 INFO scheduler.DAGScheduler: Job 1144 finished: foreachPartition at PredictorEngineApp.java:153, took 11.908314 s 18/04/17 17:16:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x54df8b1a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x54df8b1a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35204, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9714, negotiated timeout = 60000 18/04/17 17:16:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9714 18/04/17 17:16:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9714 closed 18/04/17 17:16:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:12 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.15 from job set of time 1523974560000 ms 18/04/17 17:16:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1145.0 (TID 1145) in 12446 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:16:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1145.0, whose tasks have all completed, from pool 18/04/17 17:16:12 INFO scheduler.DAGScheduler: ResultStage 1145 (foreachPartition at PredictorEngineApp.java:153) finished in 12.446 s 18/04/17 17:16:12 INFO scheduler.DAGScheduler: Job 1145 finished: foreachPartition at PredictorEngineApp.java:153, took 12.487459 s 18/04/17 17:16:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x74ccb596 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x74ccb5960x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41590, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9761, negotiated timeout = 60000 18/04/17 17:16:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9761 18/04/17 17:16:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9761 closed 18/04/17 17:16:12 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.10 from job set of time 1523974560000 ms 18/04/17 17:16:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1154.0 (TID 1154) in 15410 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:16:15 INFO scheduler.DAGScheduler: ResultStage 1154 (foreachPartition at PredictorEngineApp.java:153) finished in 15.411 s 18/04/17 17:16:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1154.0, whose tasks have all completed, from pool 18/04/17 17:16:15 INFO scheduler.DAGScheduler: Job 1154 finished: foreachPartition at PredictorEngineApp.java:153, took 15.484863 s 18/04/17 17:16:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5899dbb1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5899dbb10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41602, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9763, negotiated timeout = 60000 18/04/17 17:16:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9763 18/04/17 17:16:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9763 closed 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.29 from job set of time 1523974560000 ms 18/04/17 17:16:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1143.0 (TID 1143) in 15543 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:16:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1143.0, whose tasks have all completed, from pool 18/04/17 17:16:15 INFO scheduler.DAGScheduler: ResultStage 1143 (foreachPartition at PredictorEngineApp.java:153) finished in 15.544 s 18/04/17 17:16:15 INFO scheduler.DAGScheduler: Job 1143 finished: foreachPartition at PredictorEngineApp.java:153, took 15.579300 s 18/04/17 17:16:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2a592601 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2a5926010x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35223, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9717, negotiated timeout = 60000 18/04/17 17:16:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9717 18/04/17 17:16:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9717 closed 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.34 from job set of time 1523974560000 ms 18/04/17 17:16:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1146.0 (TID 1146) in 15590 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:16:15 INFO scheduler.DAGScheduler: ResultStage 1146 (foreachPartition at PredictorEngineApp.java:153) finished in 15.590 s 18/04/17 17:16:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1146.0, whose tasks have all completed, from pool 18/04/17 17:16:15 INFO scheduler.DAGScheduler: Job 1146 finished: foreachPartition at PredictorEngineApp.java:153, took 15.633923 s 18/04/17 17:16:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6ac9ad73 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6ac9ad730x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41608, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9765, negotiated timeout = 60000 18/04/17 17:16:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9765 18/04/17 17:16:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9765 closed 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.2 from job set of time 1523974560000 ms 18/04/17 17:16:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1158.0 (TID 1158) in 15711 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:16:15 INFO scheduler.DAGScheduler: ResultStage 1158 (foreachPartition at PredictorEngineApp.java:153) finished in 15.712 s 18/04/17 17:16:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1158.0, whose tasks have all completed, from pool 18/04/17 17:16:15 INFO scheduler.DAGScheduler: Job 1158 finished: foreachPartition at PredictorEngineApp.java:153, took 15.794715 s 18/04/17 17:16:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x622c5eb5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x622c5eb50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46206, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29041, negotiated timeout = 60000 18/04/17 17:16:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29041 18/04/17 17:16:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29041 closed 18/04/17 17:16:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.28 from job set of time 1523974560000 ms 18/04/17 17:16:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1140.0 (TID 1140) in 17375 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:16:17 INFO scheduler.DAGScheduler: ResultStage 1140 (foreachPartition at PredictorEngineApp.java:153) finished in 17.375 s 18/04/17 17:16:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1140.0, whose tasks have all completed, from pool 18/04/17 17:16:17 INFO scheduler.DAGScheduler: Job 1140 finished: foreachPartition at PredictorEngineApp.java:153, took 17.402944 s 18/04/17 17:16:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x654e6028 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x654e60280x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41617, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9766, negotiated timeout = 60000 18/04/17 17:16:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9766 18/04/17 17:16:17 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9766 closed 18/04/17 17:16:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:17 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.26 from job set of time 1523974560000 ms 18/04/17 17:16:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1150.0 (TID 1150) in 17798 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:16:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1150.0, whose tasks have all completed, from pool 18/04/17 17:16:17 INFO scheduler.DAGScheduler: ResultStage 1150 (foreachPartition at PredictorEngineApp.java:153) finished in 17.799 s 18/04/17 17:16:17 INFO scheduler.DAGScheduler: Job 1150 finished: foreachPartition at PredictorEngineApp.java:153, took 17.853496 s 18/04/17 17:16:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xb995792 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xb9957920x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46215, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29043, negotiated timeout = 60000 18/04/17 17:16:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29043 18/04/17 17:16:17 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29043 closed 18/04/17 17:16:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:17 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.24 from job set of time 1523974560000 ms 18/04/17 17:16:27 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1156.0 (TID 1156) in 27295 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:16:27 INFO cluster.YarnClusterScheduler: Removed TaskSet 1156.0, whose tasks have all completed, from pool 18/04/17 17:16:27 INFO scheduler.DAGScheduler: ResultStage 1156 (foreachPartition at PredictorEngineApp.java:153) finished in 27.295 s 18/04/17 17:16:27 INFO scheduler.DAGScheduler: Job 1157 finished: foreachPartition at PredictorEngineApp.java:153, took 27.373208 s 18/04/17 17:16:27 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3dc81894 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:27 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3dc818940x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:27 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:27 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46233, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:27 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29049, negotiated timeout = 60000 18/04/17 17:16:27 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29049 18/04/17 17:16:27 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29049 closed 18/04/17 17:16:27 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:27 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.11 from job set of time 1523974560000 ms 18/04/17 17:16:27 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1149.0 (TID 1149) in 27443 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:16:27 INFO scheduler.DAGScheduler: ResultStage 1149 (foreachPartition at PredictorEngineApp.java:153) finished in 27.443 s 18/04/17 17:16:27 INFO cluster.YarnClusterScheduler: Removed TaskSet 1149.0, whose tasks have all completed, from pool 18/04/17 17:16:27 INFO scheduler.DAGScheduler: Job 1149 finished: foreachPartition at PredictorEngineApp.java:153, took 27.495237 s 18/04/17 17:16:27 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x503d440e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:16:27 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x503d440e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:16:27 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:16:27 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41641, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:16:27 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c976a, negotiated timeout = 60000 18/04/17 17:16:27 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c976a 18/04/17 17:16:27 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c976a closed 18/04/17 17:16:27 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:16:27 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.22 from job set of time 1523974560000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Added jobs for time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.1 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1160 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: 
foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1159 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.0 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.2 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.3 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.0 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.4 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.5 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.6 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.3 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.8 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.4 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.7 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.10 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.9 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.11 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.12 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.13 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.14 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.15 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.13 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.17 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.16 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1159 (KafkaRDD[1592] at createDirectStream at PredictorEngineApp.java:125), which has 
no missing parents 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.14 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.17 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.20 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.19 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.18 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.16 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.21 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.21 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.22 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.24 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.23 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.25 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.26 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.27 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.28 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.29 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.30 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.30 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.31 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.33 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.32 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.34 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974620000 ms.35 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.35 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1159 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1159_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1159_piece0 in memory on ***IP 
masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1159 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1159 (KafkaRDD[1592] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1159.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1161 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1160 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1160 (KafkaRDD[1596] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1159.0 (TID 1159, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1160 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1160_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1160_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1160 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1160 (KafkaRDD[1596] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1160.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1163 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1161 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1161 (KafkaRDD[1617] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1160.0 (TID 1160, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1161 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1161_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1161_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1161 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1161 (KafkaRDD[1617] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1161.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1159 (foreachPartition at 
PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1162 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1162 (KafkaRDD[1595] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1161.0 (TID 1161, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1162 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1162_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1162_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1162 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1162 (KafkaRDD[1595] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1162.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1162 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1163 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1163 (KafkaRDD[1610] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1162.0 (TID 1162, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1163 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1159_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1163_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1163_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1163 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1163 (KafkaRDD[1610] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1163.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1164 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1164 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO 
scheduler.DAGScheduler: Submitting ResultStage 1164 (KafkaRDD[1609] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1163.0 (TID 1163, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1164 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1160_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1164_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1164_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1164 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1164 (KafkaRDD[1609] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1164.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1165 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1165 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1165 (KafkaRDD[1612] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1164.0 (TID 1164, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1165 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1165_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1165_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1165 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1165 (KafkaRDD[1612] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1165.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1167 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1166 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1166 (KafkaRDD[1607] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1165.0 (TID 1165, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:17:00 INFO 
storage.MemoryStore: Block broadcast_1166 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1163_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1166_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1162_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1166_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1166 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1166 (KafkaRDD[1607] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1166.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1166 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1167 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1167 (KafkaRDD[1611] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1166.0 (TID 1166, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1167 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1167_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1167_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1167 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1167 (KafkaRDD[1611] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1167.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1168 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1168 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1168 (KafkaRDD[1615] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1168 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1167.0 (TID 1167, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1161_piece0 in memory on 
***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1168_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1168_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1168 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1168 (KafkaRDD[1615] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1168.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1171 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1169 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1169 (KafkaRDD[1585] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1169 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1168.0 (TID 1168, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1166_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1164_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1169_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1169_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1169 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1169 (KafkaRDD[1585] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1156_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1169.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1170 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1170 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1170 (KafkaRDD[1604] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1170 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1169.0 (TID 1169, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2063 bytes) 
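Note: the scheduler and storage entries in this log all trace back to two call sites in the application, createDirectStream at PredictorEngineApp.java:125 and foreachPartition at PredictorEngineApp.java:153, and the hconnection-0x... ZooKeeper sessions that open and close around 17:16 above suggest an HBase connection being created and torn down inside each partition. The application's source is not part of this log, so the following is only a minimal sketch of that pattern, assuming the Spark 1.6 Kafka direct-stream API and the HBase 1.x client; the class, topic, broker, table and column-family names below are hypothetical placeholders, not taken from the real job.

// Hypothetical sketch of the pattern this log reflects: a Spark 1.6 streaming app
// reading Kafka with a direct stream (createDirectStream, as at PredictorEngineApp.java:125),
// processing each batch with foreachPartition (as at line 153), and opening and closing
// an HBase connection per partition, which is what the repeated "hconnection-0x..."
// ZooKeeper open/close pairs above would correspond to. All identifiers are placeholders.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import scala.Tuple2;

public final class PredictorEngineSketch {

    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        // Batch times in the log (1523974560000 vs 1523974620000 ms) are 60 s apart.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker-1:9092,broker-2:9092"); // placeholder

        Set<String> topics = new HashSet<>();
        topics.add("example-topic"); // placeholder; real topic names are not in the log

        // One stream like this per topic would explain the ~30 independent
        // "foreachPartition" jobs the JobScheduler starts for every batch.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        stream.foreachRDD(new VoidFunction<JavaPairRDD<String, String>>() {
            @Override
            public void call(JavaPairRDD<String, String> rdd) {
                // Each invocation appears in the log as one job / one ResultStage.
                rdd.foreachPartition(new VoidFunction<Iterator<Tuple2<String, String>>>() {
                    @Override
                    public void call(Iterator<Tuple2<String, String>> records) throws Exception {
                        // Opening a Connection here creates (and later closes) a ZooKeeper
                        // session per partition: the hconnection-0x... churn seen above.
                        Configuration hbaseConf = HBaseConfiguration.create();
                        try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
                             Table table = connection.getTable(TableName.valueOf("predictions"))) {
                            while (records.hasNext()) {
                                Tuple2<String, String> record = records.next();
                                Put put = new Put(Bytes.toBytes(record._1()));
                                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"),
                                        Bytes.toBytes(record._2()));
                                table.put(put);
                            }
                        }
                    }
                });
            }
        });

        jssc.start();
        jssc.awaitTermination();
    }
}

Under that assumption, each foreachRDD call is what the log records as one job and ResultStage per batch, and the per-partition createConnection/close pair is what produces the paired "Session establishment complete" / "Session: ... closed" ZooKeeper lines; reusing a single connection per executor JVM is a common way to avoid that churn, though nothing in this log shows whether the real application does so.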
18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1170_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1170_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1170 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1170 (KafkaRDD[1604] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1170.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1169 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1171 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1171 (KafkaRDD[1602] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1171 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1170.0 (TID 1170, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1165_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1167_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1171_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1171_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1171 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1171 (KafkaRDD[1602] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1171.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1172 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1172 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1172 (KafkaRDD[1606] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1156_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1172 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1171.0 (TID 1171, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned 
accumulator 1133 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1135 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1133_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1172_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1172_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1172 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1172 (KafkaRDD[1606] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1172.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1173 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1173 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1173 (KafkaRDD[1591] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1173 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1133_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1172.0 (TID 1172, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1173_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1173_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1173 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1173 (KafkaRDD[1591] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1173.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1175 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1174 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1174 (KafkaRDD[1586] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1174 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1173.0 (TID 1173, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1169_piece0 in memory on ***hostname masked***:60107 
(size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1174_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1174_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1174 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1174 (KafkaRDD[1586] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1174.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1174 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1175 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1175 (KafkaRDD[1593] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1172_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1171_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1175 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1174.0 (TID 1174, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1175_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1175_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1175 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1175 (KafkaRDD[1593] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1175.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1176 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1176 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1176 (KafkaRDD[1613] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1176 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1175.0 (TID 1175, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1176_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO 
storage.BlockManagerInfo: Added broadcast_1176_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1176 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1176 (KafkaRDD[1613] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1176.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1177 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1177 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1177 (KafkaRDD[1616] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1177 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1176.0 (TID 1176, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1174_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1168_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1134 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1132_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1177_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1177_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1177 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1177 (KafkaRDD[1616] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1177.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1178 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1178 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1170_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1178 (KafkaRDD[1594] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1132_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1178 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 1177.0 (TID 1177, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1137 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1135_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1175_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1178_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1178_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1178 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1178 (KafkaRDD[1594] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1178.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1180 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1179 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1179 (KafkaRDD[1603] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1179 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1178.0 (TID 1178, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1135_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1136 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1134_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1179_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1179_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1179 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1179 (KafkaRDD[1603] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1179.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1179 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1180 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1180 (KafkaRDD[1618] at 
createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1180 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1176_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1179.0 (TID 1179, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1134_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1173_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1139 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1137_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1180_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1180_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1180 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1180 (KafkaRDD[1618] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1180.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1181 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1181 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1181 (KafkaRDD[1608] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1181 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1180.0 (TID 1180, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1178_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1177_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1137_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1179_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1138 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1181_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1181_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: 
Removed broadcast_1136_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1181 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1181 (KafkaRDD[1608] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1181.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1182 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1182 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1182 (KafkaRDD[1590] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1182 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1181.0 (TID 1181, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1136_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1182_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1182_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1182 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1182 (KafkaRDD[1590] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1182.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1183 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1183 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1183 (KafkaRDD[1589] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1183 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1141 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1182.0 (TID 1182, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1139_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1183_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1183_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO 
storage.BlockManagerInfo: Added broadcast_1180_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1183 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1139_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1183 (KafkaRDD[1589] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1183.0 with 1 tasks 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Got job 1184 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1184 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1184 (KafkaRDD[1599] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1184 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1183.0 (TID 1183, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1140 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1138_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.MemoryStore: Block broadcast_1184_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1184_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO spark.SparkContext: Created broadcast 1184 from broadcast at DAGScheduler.scala:1006 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1184 (KafkaRDD[1599] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Adding task set 1184.0 with 1 tasks 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1138_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1184.0 (TID 1184, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1143 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1141_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1182_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1183_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1141_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1142 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1140_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 
491.5 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1140_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1145 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1143_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1184_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1143_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1144 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1142_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1142_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1144_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1144_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Added broadcast_1181_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1145_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1145_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1146 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1147_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1147_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1148 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1146_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1146_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1147 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1149_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1149_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1150 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1148_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1148_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1149 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1152 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1150_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO 
scheduler.TaskSetManager: Finished task 0.0 in stage 1171.0 (TID 1171) in 63 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1171.0, whose tasks have all completed, from pool 18/04/17 17:17:00 INFO scheduler.DAGScheduler: ResultStage 1171 (foreachPartition at PredictorEngineApp.java:153) finished in 0.064 s 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Job 1169 finished: foreachPartition at PredictorEngineApp.java:153, took 0.122890 s 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1150_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1151 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1153 18/04/17 17:17:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3232376c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3232376c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1151_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1151_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35392, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1153_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1153_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1154 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1152_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1152_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1154_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1175.0 (TID 1175) in 60 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1175.0, whose tasks have all completed, from pool 18/04/17 17:17:00 INFO scheduler.DAGScheduler: ResultStage 1175 (foreachPartition at PredictorEngineApp.java:153) finished in 0.060 s 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Job 1174 finished: foreachPartition at PredictorEngineApp.java:153, took 0.131251 s 18/04/17 17:17:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x60637bca connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 
17:17:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x60637bca0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1154_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46370, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1155 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1158 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1157 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1158_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1158_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO spark.ContextCleaner: Cleaned accumulator 1159 18/04/17 17:17:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1173.0 (TID 1173) in 72 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:17:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1173.0, whose tasks have all completed, from pool 18/04/17 17:17:00 INFO scheduler.DAGScheduler: ResultStage 1173 (foreachPartition at PredictorEngineApp.java:153) finished in 0.072 s 18/04/17 17:17:00 INFO scheduler.DAGScheduler: Job 1173 finished: foreachPartition at PredictorEngineApp.java:153, took 0.136451 s 18/04/17 17:17:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x9b7ddb4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x9b7ddb40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1157_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9725, negotiated timeout = 60000 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41776, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:00 INFO storage.BlockManagerInfo: Removed broadcast_1157_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29057, negotiated timeout = 60000 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9772, negotiated timeout = 60000 18/04/17 17:17:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29057 18/04/17 17:17:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9725 18/04/17 17:17:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9772 18/04/17 17:17:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29057 closed 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9725 closed 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9772 closed 18/04/17 17:17:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.9 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.18 from job set of time 1523974620000 ms 18/04/17 17:17:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.7 from job set of time 1523974620000 ms 18/04/17 17:17:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1164.0 (TID 1164) in 1929 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:17:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1164.0, whose tasks have all completed, from pool 18/04/17 17:17:02 INFO scheduler.DAGScheduler: ResultStage 1164 (foreachPartition at PredictorEngineApp.java:153) finished in 1.930 s 18/04/17 17:17:02 INFO scheduler.DAGScheduler: Job 1164 finished: foreachPartition at PredictorEngineApp.java:153, took 1.955216 s 18/04/17 17:17:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4b3633ed connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4b3633ed0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35402, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a972b, negotiated timeout = 60000 18/04/17 17:17:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a972b 18/04/17 17:17:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a972b closed 18/04/17 17:17:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.25 from job set of time 1523974620000 ms 18/04/17 17:17:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1179.0 (TID 1179) in 4658 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:17:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1179.0, whose tasks have all completed, from pool 18/04/17 17:17:04 INFO scheduler.DAGScheduler: ResultStage 1179 (foreachPartition at PredictorEngineApp.java:153) finished in 4.658 s 18/04/17 17:17:04 INFO scheduler.DAGScheduler: Job 1180 finished: foreachPartition at PredictorEngineApp.java:153, took 4.740038 s 18/04/17 17:17:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5e36e676 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5e36e6760x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46387, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2905b, negotiated timeout = 60000 18/04/17 17:17:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2905b 18/04/17 17:17:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2905b closed 18/04/17 17:17:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1182.0 (TID 1182) in 4685 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:17:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1182.0, whose tasks have all completed, from pool 18/04/17 17:17:04 INFO scheduler.DAGScheduler: ResultStage 1182 (foreachPartition at PredictorEngineApp.java:153) finished in 4.685 s 18/04/17 17:17:04 INFO scheduler.DAGScheduler: Job 1182 finished: foreachPartition at PredictorEngineApp.java:153, took 4.774080 s 18/04/17 17:17:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6b326cd3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6b326cd30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.19 from job set of time 1523974620000 ms 18/04/17 17:17:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41795, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9779, negotiated timeout = 60000 18/04/17 17:17:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9779 18/04/17 17:17:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9779 closed 18/04/17 17:17:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.6 from job set of time 1523974620000 ms 18/04/17 17:17:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1159.0 (TID 1159) in 6008 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:17:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1159.0, whose tasks have all completed, from pool 18/04/17 17:17:06 INFO scheduler.DAGScheduler: ResultStage 1159 (foreachPartition at PredictorEngineApp.java:153) finished in 6.008 s 18/04/17 17:17:06 INFO scheduler.DAGScheduler: Job 1160 finished: foreachPartition at PredictorEngineApp.java:153, took 6.015204 s 18/04/17 17:17:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x93650bd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x93650bd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35419, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a972e, negotiated timeout = 60000 18/04/17 17:17:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a972e 18/04/17 17:17:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a972e closed 18/04/17 17:17:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.8 from job set of time 1523974620000 ms 18/04/17 17:17:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1181.0 (TID 1181) in 6139 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:17:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1181.0, whose tasks have all completed, from pool 18/04/17 17:17:06 INFO scheduler.DAGScheduler: ResultStage 1181 (foreachPartition at PredictorEngineApp.java:153) finished in 6.139 s 18/04/17 17:17:06 INFO scheduler.DAGScheduler: Job 1181 finished: foreachPartition at PredictorEngineApp.java:153, took 6.225303 s 18/04/17 17:17:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7ba83d79 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7ba83d790x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41804, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c977b, negotiated timeout = 60000 18/04/17 17:17:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c977b 18/04/17 17:17:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c977b closed 18/04/17 17:17:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.24 from job set of time 1523974620000 ms 18/04/17 17:17:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1160.0 (TID 1160) in 7457 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:17:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1160.0, whose tasks have all completed, from pool 18/04/17 17:17:07 INFO scheduler.DAGScheduler: ResultStage 1160 (foreachPartition at PredictorEngineApp.java:153) finished in 7.457 s 18/04/17 17:17:07 INFO scheduler.DAGScheduler: Job 1161 finished: foreachPartition at PredictorEngineApp.java:153, took 7.468024 s 18/04/17 17:17:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7a8d181c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7a8d181c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35427, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a972f, negotiated timeout = 60000 18/04/17 17:17:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a972f 18/04/17 17:17:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a972f closed 18/04/17 17:17:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.12 from job set of time 1523974620000 ms 18/04/17 17:17:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1168.0 (TID 1168) in 7838 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:17:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1168.0, whose tasks have all completed, from pool 18/04/17 17:17:07 INFO scheduler.DAGScheduler: ResultStage 1168 (foreachPartition at PredictorEngineApp.java:153) finished in 7.839 s 18/04/17 17:17:07 INFO scheduler.DAGScheduler: Job 1168 finished: foreachPartition at PredictorEngineApp.java:153, took 7.877273 s 18/04/17 17:17:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7c9da9a3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7c9da9a30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46407, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2905e, negotiated timeout = 60000 18/04/17 17:17:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2905e 18/04/17 17:17:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2905e closed 18/04/17 17:17:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.31 from job set of time 1523974620000 ms 18/04/17 17:17:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1176.0 (TID 1176) in 8933 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:17:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1176.0, whose tasks have all completed, from pool 18/04/17 17:17:09 INFO scheduler.DAGScheduler: ResultStage 1176 (foreachPartition at PredictorEngineApp.java:153) finished in 8.934 s 18/04/17 17:17:09 INFO scheduler.DAGScheduler: Job 1176 finished: foreachPartition at PredictorEngineApp.java:153, took 9.006437 s 18/04/17 17:17:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x558773a6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x558773a60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46412, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2905f, negotiated timeout = 60000 18/04/17 17:17:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2905f 18/04/17 17:17:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2905f closed 18/04/17 17:17:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.29 from job set of time 1523974620000 ms 18/04/17 17:17:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1166.0 (TID 1166) in 9573 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:17:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1166.0, whose tasks have all completed, from pool 18/04/17 17:17:09 INFO scheduler.DAGScheduler: ResultStage 1166 (foreachPartition at PredictorEngineApp.java:153) finished in 9.573 s 18/04/17 17:17:09 INFO scheduler.DAGScheduler: Job 1167 finished: foreachPartition at PredictorEngineApp.java:153, took 9.605665 s 18/04/17 17:17:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6645aa28 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6645aa280x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41820, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c977d, negotiated timeout = 60000 18/04/17 17:17:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c977d 18/04/17 17:17:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c977d closed 18/04/17 17:17:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.23 from job set of time 1523974620000 ms 18/04/17 17:17:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1161.0 (TID 1161) in 12671 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:17:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1161.0, whose tasks have all completed, from pool 18/04/17 17:17:12 INFO scheduler.DAGScheduler: ResultStage 1161 (foreachPartition at PredictorEngineApp.java:153) finished in 12.671 s 18/04/17 17:17:12 INFO scheduler.DAGScheduler: Job 1163 finished: foreachPartition at PredictorEngineApp.java:153, took 12.686053 s 18/04/17 17:17:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1177b9c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1177b9c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35449, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9731, negotiated timeout = 60000 18/04/17 17:17:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9731 18/04/17 17:17:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9731 closed 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:12 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.33 from job set of time 1523974620000 ms 18/04/17 17:17:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1177.0 (TID 1177) in 12757 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:17:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1177.0, whose tasks have all completed, from pool 18/04/17 17:17:12 INFO scheduler.DAGScheduler: ResultStage 1177 (foreachPartition at PredictorEngineApp.java:153) finished in 12.758 s 18/04/17 17:17:12 INFO scheduler.DAGScheduler: Job 1177 finished: foreachPartition at PredictorEngineApp.java:153, took 12.833918 s 18/04/17 17:17:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x137e8629 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x137e86290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46429, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29061, negotiated timeout = 60000 18/04/17 17:17:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29061 18/04/17 17:17:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29061 closed 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:12 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.32 from job set of time 1523974620000 ms 18/04/17 17:17:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1165.0 (TID 1165) in 12841 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:17:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1165.0, whose tasks have all completed, from pool 18/04/17 17:17:12 INFO scheduler.DAGScheduler: ResultStage 1165 (foreachPartition at PredictorEngineApp.java:153) finished in 12.841 s 18/04/17 17:17:12 INFO scheduler.DAGScheduler: Job 1165 finished: foreachPartition at PredictorEngineApp.java:153, took 12.870479 s 18/04/17 17:17:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7e3f6708 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7e3f67080x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35455, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9733, negotiated timeout = 60000 18/04/17 17:17:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9733 18/04/17 17:17:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9733 closed 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1180.0 (TID 1180) in 12813 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:17:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1180.0, whose tasks have all completed, from pool 18/04/17 17:17:12 INFO scheduler.DAGScheduler: ResultStage 1180 (foreachPartition at PredictorEngineApp.java:153) finished in 12.814 s 18/04/17 17:17:12 INFO scheduler.DAGScheduler: Job 1179 finished: foreachPartition at PredictorEngineApp.java:153, took 12.897650 s 18/04/17 17:17:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x70da1862 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x70da18620x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35458, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:12 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.28 from job set of time 1523974620000 ms 18/04/17 17:17:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9734, negotiated timeout = 60000 18/04/17 17:17:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9734 18/04/17 17:17:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9734 closed 18/04/17 17:17:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.34 from job set of time 1523974620000 ms 18/04/17 17:17:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1174.0 (TID 1174) in 13017 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:17:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1174.0, whose tasks have all completed, from pool 18/04/17 17:17:13 INFO scheduler.DAGScheduler: ResultStage 1174 (foreachPartition at PredictorEngineApp.java:153) finished in 13.017 s 18/04/17 17:17:13 INFO scheduler.DAGScheduler: Job 1175 finished: foreachPartition at PredictorEngineApp.java:153, took 13.085072 s 18/04/17 17:17:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4e9e51ee connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4e9e51ee0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46439, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29063, negotiated timeout = 60000 18/04/17 17:17:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29063 18/04/17 17:17:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29063 closed 18/04/17 17:17:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.2 from job set of time 1523974620000 ms 18/04/17 17:17:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1162.0 (TID 1162) in 14600 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:17:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1162.0, whose tasks have all completed, from pool 18/04/17 17:17:14 INFO scheduler.DAGScheduler: ResultStage 1162 (foreachPartition at PredictorEngineApp.java:153) finished in 14.600 s 18/04/17 17:17:14 INFO scheduler.DAGScheduler: Job 1159 finished: foreachPartition at PredictorEngineApp.java:153, took 14.619800 s 18/04/17 17:17:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x722f439b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x722f439b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41848, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9780, negotiated timeout = 60000 18/04/17 17:17:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9780 18/04/17 17:17:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9780 closed 18/04/17 17:17:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.11 from job set of time 1523974620000 ms 18/04/17 17:17:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1169.0 (TID 1169) in 15159 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:17:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1169.0, whose tasks have all completed, from pool 18/04/17 17:17:15 INFO scheduler.DAGScheduler: ResultStage 1169 (foreachPartition at PredictorEngineApp.java:153) finished in 15.160 s 18/04/17 17:17:15 INFO scheduler.DAGScheduler: Job 1171 finished: foreachPartition at PredictorEngineApp.java:153, took 15.213578 s 18/04/17 17:17:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5057128f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5057128f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35470, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9736, negotiated timeout = 60000 18/04/17 17:17:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9736 18/04/17 17:17:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9736 closed 18/04/17 17:17:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.1 from job set of time 1523974620000 ms 18/04/17 17:17:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1184.0 (TID 1184) in 18467 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:17:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1184.0, whose tasks have all completed, from pool 18/04/17 17:17:18 INFO scheduler.DAGScheduler: ResultStage 1184 (foreachPartition at PredictorEngineApp.java:153) finished in 18.467 s 18/04/17 17:17:18 INFO scheduler.DAGScheduler: Job 1184 finished: foreachPartition at PredictorEngineApp.java:153, took 18.559677 s 18/04/17 17:17:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7f02c42e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7f02c42e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46454, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29067, negotiated timeout = 60000 18/04/17 17:17:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29067 18/04/17 17:17:18 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29067 closed 18/04/17 17:17:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:18 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.15 from job set of time 1523974620000 ms 18/04/17 17:17:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1167.0 (TID 1167) in 19417 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:17:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 1167.0, whose tasks have all completed, from pool 18/04/17 17:17:19 INFO scheduler.DAGScheduler: ResultStage 1167 (foreachPartition at PredictorEngineApp.java:153) finished in 19.417 s 18/04/17 17:17:19 INFO scheduler.DAGScheduler: Job 1166 finished: foreachPartition at PredictorEngineApp.java:153, took 19.452016 s 18/04/17 17:17:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d3c66fa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7d3c66fa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41863, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9783, negotiated timeout = 60000 18/04/17 17:17:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9783 18/04/17 17:17:19 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9783 closed 18/04/17 17:17:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:19 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.27 from job set of time 1523974620000 ms 18/04/17 17:17:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1172.0 (TID 1172) in 19629 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:17:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 1172.0, whose tasks have all completed, from pool 18/04/17 17:17:19 INFO scheduler.DAGScheduler: ResultStage 1172 (foreachPartition at PredictorEngineApp.java:153) finished in 19.630 s 18/04/17 17:17:19 INFO scheduler.DAGScheduler: Job 1172 finished: foreachPartition at PredictorEngineApp.java:153, took 19.691643 s 18/04/17 17:17:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x46f253c5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x46f253c50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46461, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29069, negotiated timeout = 60000 18/04/17 17:17:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29069 18/04/17 17:17:19 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29069 closed 18/04/17 17:17:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:19 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.22 from job set of time 1523974620000 ms 18/04/17 17:17:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1170.0 (TID 1170) in 20033 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:17:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 1170.0, whose tasks have all completed, from pool 18/04/17 17:17:20 INFO scheduler.DAGScheduler: ResultStage 1170 (foreachPartition at PredictorEngineApp.java:153) finished in 20.035 s 18/04/17 17:17:20 INFO scheduler.DAGScheduler: Job 1170 finished: foreachPartition at PredictorEngineApp.java:153, took 20.090610 s 18/04/17 17:17:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7526173 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x75261730x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41870, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9784, negotiated timeout = 60000 18/04/17 17:17:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9784 18/04/17 17:17:20 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9784 closed 18/04/17 17:17:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:20 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.20 from job set of time 1523974620000 ms 18/04/17 17:17:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1163.0 (TID 1163) in 20339 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:17:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 1163.0, whose tasks have all completed, from pool 18/04/17 17:17:20 INFO scheduler.DAGScheduler: ResultStage 1163 (foreachPartition at PredictorEngineApp.java:153) finished in 20.339 s 18/04/17 17:17:20 INFO scheduler.DAGScheduler: Job 1162 finished: foreachPartition at PredictorEngineApp.java:153, took 20.361751 s 18/04/17 17:17:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x16a4ac5e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x16a4ac5e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46468, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2906a, negotiated timeout = 60000 18/04/17 17:17:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2906a 18/04/17 17:17:20 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2906a closed 18/04/17 17:17:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:20 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.26 from job set of time 1523974620000 ms 18/04/17 17:17:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1178.0 (TID 1178) in 21836 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:17:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 1178.0, whose tasks have all completed, from pool 18/04/17 17:17:22 INFO scheduler.DAGScheduler: ResultStage 1178 (foreachPartition at PredictorEngineApp.java:153) finished in 21.836 s 18/04/17 17:17:22 INFO scheduler.DAGScheduler: Job 1178 finished: foreachPartition at PredictorEngineApp.java:153, took 21.914440 s 18/04/17 17:17:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3a1f599b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3a1f599b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:41878, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9785, negotiated timeout = 60000 18/04/17 17:17:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9785 18/04/17 17:17:22 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9785 closed 18/04/17 17:17:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:22 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.10 from job set of time 1523974620000 ms 18/04/17 17:17:26 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1183.0 (TID 1183) in 26057 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:17:26 INFO cluster.YarnClusterScheduler: Removed TaskSet 1183.0, whose tasks have all completed, from pool 18/04/17 17:17:26 INFO scheduler.DAGScheduler: ResultStage 1183 (foreachPartition at PredictorEngineApp.java:153) finished in 26.057 s 18/04/17 17:17:26 INFO scheduler.DAGScheduler: Job 1183 finished: foreachPartition at PredictorEngineApp.java:153, took 26.147480 s 18/04/17 17:17:26 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x9f74356 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:17:26 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x9f743560x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:17:26 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:17:26 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46483, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:17:26 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2906d, negotiated timeout = 60000 18/04/17 17:17:26 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2906d 18/04/17 17:17:26 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2906d closed 18/04/17 17:17:26 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:17:26 INFO scheduler.JobScheduler: Finished job streaming job 1523974620000 ms.5 from job set of time 1523974620000 ms 18/04/17 17:17:26 INFO scheduler.JobScheduler: Total delay: 26.283 s for time 1523974620000 ms (execution: 26.233 s) 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1512 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1512 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1548 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1548 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1512 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1512 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1548 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1548 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1513 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1513 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1549 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1549 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1513 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1513 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1549 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1549 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1514 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1514 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1550 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1550 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1514 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1514 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1550 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1550 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1515 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1515 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1551 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1551 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1515 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1515 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1551 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1551 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1516 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1516 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1552 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1552 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1516 
from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1516 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1552 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1552 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1517 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1517 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1553 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1553 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1517 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1517 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1553 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1553 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1518 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1518 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1554 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1554 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1518 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1518 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1554 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1554 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1519 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1519 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1555 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1555 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1519 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1519 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1555 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1555 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1520 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1520 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1556 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1556 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1520 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1520 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1556 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1556 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1521 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1521 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1557 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1557 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1521 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1521 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1557 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1181_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1557 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1522 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1522 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1558 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1181_piece0 on ***hostname masked***:53081 in memory 
(size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1558 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1522 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1522 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1558 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1558 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1523 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1523 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1559 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1559 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1523 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1523 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1559 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1559 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1524 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1524 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1560 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1560 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1524 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1524 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1560 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1560 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1525 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1525 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1561 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1561 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1525 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1525 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1561 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1561 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1526 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1526 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1562 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1562 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1526 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1526 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1562 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1562 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1527 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1527 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1563 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1563 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1527 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1527 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1563 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1563 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1528 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1528 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1564 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1564 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 
1528 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1528 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1564 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1564 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1529 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1529 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1565 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1565 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1529 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1529 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1565 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1565 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1530 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1530 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1566 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1566 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1530 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1530 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1566 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1566 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1531 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1531 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1567 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1567 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1531 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1531 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1567 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1567 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1532 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1532 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1568 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1568 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1532 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1532 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1568 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1568 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1533 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1533 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1569 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1569 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1533 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1533 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1569 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1569 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1534 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1534 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1570 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1570 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1534 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1534 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1570 
from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1570 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1535 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1535 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1571 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1571 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1535 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1535 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1571 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1571 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1536 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1536 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1572 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1572 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1536 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1536 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1572 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1572 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1537 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1537 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1573 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1573 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1537 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1537 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1573 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1573 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1538 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1538 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1574 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1574 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1538 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1538 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1574 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1574 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1539 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1539 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1575 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1575 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1539 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1539 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1575 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1575 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1540 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1540 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1576 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1576 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1540 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1540 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1576 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1159_piece0 on ***IP masked***:45737 in memory (size: 3.1 
KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1576 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1541 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1159_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1541 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1577 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1577 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1541 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1541 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1577 from persistence list 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1160 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1577 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1542 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1542 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1578 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1160_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1578 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1542 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1542 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1578 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1160_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1578 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1543 from persistence list 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1161 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1163 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1543 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1579 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1579 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1543 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1161_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1543 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1579 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1579 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1544 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1161_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1544 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1580 from persistence list 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1162 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1164 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1580 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1544 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1544 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1580 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1162_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO 
storage.BlockManager: Removing RDD 1580 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1545 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1162_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1545 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1581 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1581 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1165 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1545 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1545 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1581 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1163_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1581 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1546 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1546 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1582 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1582 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1546 from persistence list 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1163_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1546 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1582 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1582 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1547 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1547 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1583 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1583 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1547 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1547 18/04/17 17:17:26 INFO kafka.KafkaRDD: Removing RDD 1583 from persistence list 18/04/17 17:17:26 INFO storage.BlockManager: Removing RDD 1583 18/04/17 17:17:26 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:17:26 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974500000 ms 1523974440000 ms 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1165_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1165_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1166 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1164_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1164_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1166_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1166_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1167 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed 
broadcast_1167_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1167_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1168 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1184_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1184_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1185 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1183_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1183_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1170 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1168_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1168_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1169 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1171 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1169_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1169_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1172 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1170_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1170_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1171_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1171_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1172_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1172_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1173 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1174 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1174_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1174_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1175 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1173_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1173_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO 
spark.ContextCleaner: Cleaned accumulator 1177 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1175_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1175_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1176 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1178 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1176_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1176_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1178_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1178_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1179 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1177_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1177_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1180 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1181 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1179_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1179_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1182 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1180_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1180_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1184 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1182_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:17:26 INFO storage.BlockManagerInfo: Removed broadcast_1182_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:17:26 INFO spark.ContextCleaner: Cleaned accumulator 1183 18/04/17 17:18:00 INFO scheduler.JobScheduler: Added jobs for time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.0 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.2 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.1 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.3 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.4 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.3 from job set of time 1523974680000 ms 18/04/17 17:18:00 
INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.0 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.6 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.4 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.5 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.8 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.7 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.9 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.10 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.11 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.12 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.13 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.14 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.15 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.13 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.17 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.14 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.16 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.17 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.18 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.19 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.16 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.20 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.21 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.22 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.23 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.21 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.25 from job set of time 1523974680000 ms 18/04/17 
17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.24 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.26 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.27 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.28 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.29 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.30 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.31 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.32 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.33 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.34 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974680000 ms.35 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.35 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.30 from job set of time 1523974680000 ms 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1185 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1185 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1185 (KafkaRDD[1643] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 
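The trace repeatedly points at two call sites in the user code: createDirectStream at PredictorEngineApp.java:125 and foreachPartition at PredictorEngineApp.java:153, with one-minute batches (1523974620000 ms, then 1523974680000 ms) fanning out into roughly 36 streaming jobs each, and every task opening and closing its own HBase HConnection/ZooKeeper session (hconnection-0x..., "Session ... closed"). The following is a minimal sketch of that pattern only, reconstructed from what the log shows; it is not the actual PredictorEngineApp source, and the broker list, topic, and table names below are placeholders.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import kafka.serializer.StringDecoder;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class PredictorEngineSketch {                       // hypothetical name, for illustration only
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine");
        // 60-second batch interval, matching the 60 000 ms spacing between batch times in the log.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092");   // placeholder; not visible in the log

        // Corresponds to the KafkaRDD[...] at createDirectStream (PredictorEngineApp.java:125).
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc,
                String.class, String.class,
                StringDecoder.class, StringDecoder.class,
                kafkaParams,
                Collections.singleton("input-topic"));              // placeholder topic

        // Corresponds to the foreachPartition at PredictorEngineApp.java:153.
        stream.foreachRDD(rdd ->
            rdd.foreachPartition(records -> {
                // The log shows a fresh HConnection/ZooKeeper session per partition,
                // opened and closed around each write.
                try (Connection hbase = ConnectionFactory.createConnection(HBaseConfiguration.create());
                     Table table = hbase.getTable(TableName.valueOf("predictions"))) {   // placeholder table
                    while (records.hasNext()) {
                        records.next();   // scoring/write logic is not recoverable from the log
                    }
                }
            }));

        jssc.start();
        jssc.awaitTermination();
    }
}

If the application does follow this shape, the per-partition connect/disconnect churn visible above (a new ZooKeeper session negotiated and torn down every few seconds) is the expected cost of creating the HBase connection inside foreachPartition; a per-executor connection reuse scheme would avoid it, but nothing in the trace indicates one is in use.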
18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1185 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1185_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1185_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1185 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1185 (KafkaRDD[1643] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1185.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1186 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1186 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1186 (KafkaRDD[1640] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1186 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1185.0 (TID 1185, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1186_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1186_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1186 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO 
scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1186 (KafkaRDD[1640] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1186.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1187 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1187 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1187 (KafkaRDD[1632] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1186.0 (TID 1186, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1187 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1187_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1187_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1187 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1187 (KafkaRDD[1632] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1187.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1188 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1188 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1188 (KafkaRDD[1652] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1188 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1187.0 (TID 1187, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1188_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1188_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1188 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1188 (KafkaRDD[1652] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1188.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1189 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1189 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 
17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1189 (KafkaRDD[1625] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1189 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1188.0 (TID 1188, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1185_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1189_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1189_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1189 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1189 (KafkaRDD[1625] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1189.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1190 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1190 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1190 (KafkaRDD[1622] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1189.0 (TID 1189, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1190 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1190_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1190_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1190 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1190 (KafkaRDD[1622] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1190.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1191 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1191 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1191 (KafkaRDD[1645] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO 
storage.MemoryStore: Block broadcast_1191 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1190.0 (TID 1190, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1186_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1187_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1191_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1191_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1191 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1191 (KafkaRDD[1645] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1191.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1193 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1192 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1192 (KafkaRDD[1621] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1192 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1191.0 (TID 1191, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1192_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1192_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1192 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1192 (KafkaRDD[1621] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1192.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1192 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1193 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1189_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1193 (KafkaRDD[1627] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1192.0 
(TID 1192, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1193 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1193_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1193_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1193 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1193 (KafkaRDD[1627] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1193.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1194 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1194 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1194 (KafkaRDD[1654] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1194 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1193.0 (TID 1193, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1194_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1194_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1194 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1194 (KafkaRDD[1654] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1194.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1195 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1195 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1195 (KafkaRDD[1648] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1190_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1195 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1194.0 (TID 1194, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1188_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 
3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1195_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1195_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1195 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1195 (KafkaRDD[1648] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1195.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1196 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1196 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1196 (KafkaRDD[1635] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1196 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1195.0 (TID 1195, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1196_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1196_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1196 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1196 (KafkaRDD[1635] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1196.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1197 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1197 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1197 (KafkaRDD[1644] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1197 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1193_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1196.0 (TID 1196, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1194_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1192_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block 
broadcast_1197_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1197_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1197 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1197 (KafkaRDD[1644] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1197.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1198 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1198 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1198 (KafkaRDD[1651] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1198 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1197.0 (TID 1197, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1198_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1198_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1198 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1198 (KafkaRDD[1651] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1198.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1199 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1199 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1199 (KafkaRDD[1647] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1196_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1199 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1198.0 (TID 1198, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1197_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1199_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1199_piece0 in memory on ***IP masked***:45737 (size: 
3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1199 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1199 (KafkaRDD[1647] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1199.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1200 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1200 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1200 (KafkaRDD[1639] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1200 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1199.0 (TID 1199, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1195_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1200_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1200_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1200 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1200 (KafkaRDD[1639] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1200.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1201 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1201 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1201 (KafkaRDD[1630] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1201 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1200.0 (TID 1200, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1199_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1201_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1201_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1201 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks 
from ResultStage 1201 (KafkaRDD[1630] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1201.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1202 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1202 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1202 (KafkaRDD[1626] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1201.0 (TID 1201, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1202 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1200_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1198_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1202_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1202_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1202 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1202 (KafkaRDD[1626] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1202.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1203 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1203 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1203 (KafkaRDD[1649] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1203 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1202.0 (TID 1202, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1191_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1203_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1203_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1203 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1203 (KafkaRDD[1649] at createDirectStream 
at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1203.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1204 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1204 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1204 (KafkaRDD[1638] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1204 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1203.0 (TID 1203, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1201_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1204_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1204_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1204 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1204 (KafkaRDD[1638] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1204.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1205 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1205 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1205 (KafkaRDD[1642] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1205 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1204.0 (TID 1204, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1203_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1202_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1205_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1205_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1205 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1205 (KafkaRDD[1642] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO 
cluster.YarnClusterScheduler: Adding task set 1205.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1206 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1206 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1206 (KafkaRDD[1628] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1206 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1205.0 (TID 1205, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1204_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1206_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1206_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1206 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1206 (KafkaRDD[1628] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1206.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1208 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1207 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1207 (KafkaRDD[1646] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1207 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1206.0 (TID 1206, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1207_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1207_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1205_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1207 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1207 (KafkaRDD[1646] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1207.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1207 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 
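The scheduler.DAGScheduler and scheduler.TaskSetManager entries above keep pointing at "createDirectStream at PredictorEngineApp.java:125" and "foreachPartition at PredictorEngineApp.java:153", which is why every micro-batch fans out into many single-task ResultStages: the log suggests a number of separate Kafka direct streams, each with one partition and each registering its own foreachPartition output operation. The sketch below is a minimal, hypothetical reconstruction of that driver structure for Spark 1.6 with the spark-streaming-kafka direct API; only the class name and the two call sites come from the log, while the batch interval, broker list, topic set, and per-record logic are placeholders rather than the actual application code, and a single stream stands in for the many shown in the log.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    import kafka.serializer.StringDecoder;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    import scala.Tuple2;

    public class PredictorEngineApp {
      public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine");
        // Batch interval is an assumption; the log only shows a batch time of 1523974680000 ms.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers
        Set<String> topics = new HashSet<>(Arrays.asList("events"));          // placeholder topic

        // Corresponds to "createDirectStream at PredictorEngineApp.java:125" in the log:
        // the direct stream yields one KafkaRDD per batch, one partition per Kafka partition.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
            jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
            kafkaParams, topics);

        // Corresponds to "foreachPartition at PredictorEngineApp.java:153": each batch RDD
        // becomes one ResultStage whose single task the TaskSetManager entries show starting.
        stream.foreachRDD((JavaPairRDD<String, String> rdd) -> {
          rdd.foreachPartition(records -> {
            while (records.hasNext()) {
              Tuple2<String, String> record = records.next();
              // Per-record scoring and any HBase writes would go here (assumed, not shown in the log).
            }
          });
        });

        jssc.start();
        jssc.awaitTermination();
      }
    }

Each foreachPartition call on a one-partition KafkaRDD explains the "Submitting 1 missing tasks from ResultStage ..." and "Adding task set ... with 1 tasks" pairs repeated above.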
18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1208 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1208 (KafkaRDD[1631] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1208 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1207.0 (TID 1207, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1208_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1208_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1208 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1208 (KafkaRDD[1631] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1208.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1209 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1209 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1209 (KafkaRDD[1629] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1209 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1208.0 (TID 1208, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1209_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1209_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1209 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1209 (KafkaRDD[1629] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1209.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Got job 1210 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1210 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1210 (KafkaRDD[1653] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:18:00 INFO storage.MemoryStore: Block 
broadcast_1210 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1209.0 (TID 1209, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1207_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.MemoryStore: Block broadcast_1210_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1210_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1206_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO spark.SparkContext: Created broadcast 1210 from broadcast at DAGScheduler.scala:1006 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1210 (KafkaRDD[1653] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Adding task set 1210.0 with 1 tasks 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1210.0 (TID 1210, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1208_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1209_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO storage.BlockManagerInfo: Added broadcast_1210_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:18:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1206.0 (TID 1206) in 77 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:18:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1206.0, whose tasks have all completed, from pool 18/04/17 17:18:00 INFO scheduler.DAGScheduler: ResultStage 1206 (foreachPartition at PredictorEngineApp.java:153) finished in 0.078 s 18/04/17 17:18:00 INFO scheduler.DAGScheduler: Job 1206 finished: foreachPartition at PredictorEngineApp.java:153, took 0.162171 s 18/04/17 17:18:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x73078ef4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x73078ef40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46626, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29076, negotiated timeout = 60000 18/04/17 17:18:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29076 18/04/17 17:18:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29076 closed 18/04/17 17:18:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.8 from job set of time 1523974680000 ms 18/04/17 17:18:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1191.0 (TID 1191) in 3089 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:18:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1191.0, whose tasks have all completed, from pool 18/04/17 17:18:03 INFO scheduler.DAGScheduler: ResultStage 1191 (foreachPartition at PredictorEngineApp.java:153) finished in 3.089 s 18/04/17 17:18:03 INFO scheduler.DAGScheduler: Job 1191 finished: foreachPartition at PredictorEngineApp.java:153, took 3.111164 s 18/04/17 17:18:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5d3f8a67 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5d3f8a670x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
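From this point on, every completed foreachPartition job is followed on the driver by the same cycle: a RecoverableZooKeeper / hconnection client opens a session against the ZooKeeper ensemble (baseZNode=/hbase) and closes it again within the same second. The sketch below shows the HBase 1.x client calls that produce exactly this connect/use/close pattern; the class and method names, quorum hosts, table, and column identifiers are placeholders for illustration, and the log alone does not reveal whether the application intends to re-open a connection per batch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseSinkSketch {
      public static void writeCell(byte[] rowKey, byte[] value) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
        conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3"); // placeholder ZooKeeper ensemble
        conf.set("zookeeper.znode.parent", "/hbase");      // matches baseZNode=/hbase in the log

        // Opening the connection triggers the ZooKeeper session seen in the log;
        // close() at the end of the try block produces the matching
        // "Closing zookeeper sessionid" / "Session ... closed" entries.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("predictions"))) { // placeholder table
          Put put = new Put(rowKey);
          put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"), value);
          table.put(put);
        }
      }
    }

Caching and reusing a single Connection instead of opening one per batch is the commonly recommended way to avoid paying the ZooKeeper session setup on every streaming interval, but whether that applies here depends on application code not visible in this log.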
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42038, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9795, negotiated timeout = 60000 18/04/17 17:18:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9795 18/04/17 17:18:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9795 closed 18/04/17 17:18:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.25 from job set of time 1523974680000 ms 18/04/17 17:18:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1193.0 (TID 1193) in 3940 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:18:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1193.0, whose tasks have all completed, from pool 18/04/17 17:18:04 INFO scheduler.DAGScheduler: ResultStage 1193 (foreachPartition at PredictorEngineApp.java:153) finished in 3.940 s 18/04/17 17:18:04 INFO scheduler.DAGScheduler: Job 1192 finished: foreachPartition at PredictorEngineApp.java:153, took 3.968070 s 18/04/17 17:18:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5139c457 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5139c4570x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42042, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9796, negotiated timeout = 60000 18/04/17 17:18:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9796 18/04/17 17:18:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9796 closed 18/04/17 17:18:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.7 from job set of time 1523974680000 ms 18/04/17 17:18:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1202.0 (TID 1202) in 4955 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:18:05 INFO scheduler.DAGScheduler: ResultStage 1202 (foreachPartition at PredictorEngineApp.java:153) finished in 4.955 s 18/04/17 17:18:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1202.0, whose tasks have all completed, from pool 18/04/17 17:18:05 INFO scheduler.DAGScheduler: Job 1202 finished: foreachPartition at PredictorEngineApp.java:153, took 5.023651 s 18/04/17 17:18:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xa4b8941 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xa4b89410x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42046, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9797, negotiated timeout = 60000 18/04/17 17:18:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9797 18/04/17 17:18:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9797 closed 18/04/17 17:18:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.6 from job set of time 1523974680000 ms 18/04/17 17:18:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1198.0 (TID 1198) in 5776 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:18:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1198.0, whose tasks have all completed, from pool 18/04/17 17:18:05 INFO scheduler.DAGScheduler: ResultStage 1198 (foreachPartition at PredictorEngineApp.java:153) finished in 5.777 s 18/04/17 17:18:05 INFO scheduler.DAGScheduler: Job 1198 finished: foreachPartition at PredictorEngineApp.java:153, took 5.827528 s 18/04/17 17:18:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x33c8dd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x33c8dd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42050, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9798, negotiated timeout = 60000 18/04/17 17:18:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9798 18/04/17 17:18:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9798 closed 18/04/17 17:18:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.31 from job set of time 1523974680000 ms 18/04/17 17:18:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1196.0 (TID 1196) in 6861 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:18:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1196.0, whose tasks have all completed, from pool 18/04/17 17:18:06 INFO scheduler.DAGScheduler: ResultStage 1196 (foreachPartition at PredictorEngineApp.java:153) finished in 6.862 s 18/04/17 17:18:06 INFO scheduler.DAGScheduler: Job 1196 finished: foreachPartition at PredictorEngineApp.java:153, took 6.898039 s 18/04/17 17:18:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x25b6b344 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x25b6b3440x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35672, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9746, negotiated timeout = 60000 18/04/17 17:18:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9746 18/04/17 17:18:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9746 closed 18/04/17 17:18:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.15 from job set of time 1523974680000 ms 18/04/17 17:18:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1204.0 (TID 1204) in 7141 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:18:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1204.0, whose tasks have all completed, from pool 18/04/17 17:18:07 INFO scheduler.DAGScheduler: ResultStage 1204 (foreachPartition at PredictorEngineApp.java:153) finished in 7.142 s 18/04/17 17:18:07 INFO scheduler.DAGScheduler: Job 1204 finished: foreachPartition at PredictorEngineApp.java:153, took 7.218192 s 18/04/17 17:18:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x61e3d85c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x61e3d85c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46653, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2907d, negotiated timeout = 60000 18/04/17 17:18:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2907d 18/04/17 17:18:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2907d closed 18/04/17 17:18:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.18 from job set of time 1523974680000 ms 18/04/17 17:18:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1187.0 (TID 1187) in 7782 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:18:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1187.0, whose tasks have all completed, from pool 18/04/17 17:18:07 INFO scheduler.DAGScheduler: ResultStage 1187 (foreachPartition at PredictorEngineApp.java:153) finished in 7.783 s 18/04/17 17:18:07 INFO scheduler.DAGScheduler: Job 1187 finished: foreachPartition at PredictorEngineApp.java:153, took 7.793804 s 18/04/17 17:18:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4058d00e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4058d00e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35679, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9747, negotiated timeout = 60000 18/04/17 17:18:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9747 18/04/17 17:18:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9747 closed 18/04/17 17:18:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.12 from job set of time 1523974680000 ms 18/04/17 17:18:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1200.0 (TID 1200) in 8242 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:18:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1200.0, whose tasks have all completed, from pool 18/04/17 17:18:08 INFO scheduler.DAGScheduler: ResultStage 1200 (foreachPartition at PredictorEngineApp.java:153) finished in 8.243 s 18/04/17 17:18:08 INFO scheduler.DAGScheduler: Job 1200 finished: foreachPartition at PredictorEngineApp.java:153, took 8.301729 s 18/04/17 17:18:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x8150fde connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x8150fde0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42066, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c979b, negotiated timeout = 60000 18/04/17 17:18:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c979b 18/04/17 17:18:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c979b closed 18/04/17 17:18:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.19 from job set of time 1523974680000 ms 18/04/17 17:18:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1205.0 (TID 1205) in 11639 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:18:11 INFO scheduler.DAGScheduler: ResultStage 1205 (foreachPartition at PredictorEngineApp.java:153) finished in 11.640 s 18/04/17 17:18:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1205.0, whose tasks have all completed, from pool 18/04/17 17:18:11 INFO scheduler.DAGScheduler: Job 1205 finished: foreachPartition at PredictorEngineApp.java:153, took 11.733575 s 18/04/17 17:18:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63098775 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x630987750x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42075, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c979c, negotiated timeout = 60000 18/04/17 17:18:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c979c 18/04/17 17:18:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c979c closed 18/04/17 17:18:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.22 from job set of time 1523974680000 ms 18/04/17 17:18:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1201.0 (TID 1201) in 12365 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:18:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1201.0, whose tasks have all completed, from pool 18/04/17 17:18:12 INFO scheduler.DAGScheduler: ResultStage 1201 (foreachPartition at PredictorEngineApp.java:153) finished in 12.365 s 18/04/17 17:18:12 INFO scheduler.DAGScheduler: Job 1201 finished: foreachPartition at PredictorEngineApp.java:153, took 12.428431 s 18/04/17 17:18:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4c1a8fdd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4c1a8fdd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35697, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9748, negotiated timeout = 60000 18/04/17 17:18:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9748 18/04/17 17:18:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9748 closed 18/04/17 17:18:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:12 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.10 from job set of time 1523974680000 ms 18/04/17 17:18:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1209.0 (TID 1209) in 12954 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:18:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1209.0, whose tasks have all completed, from pool 18/04/17 17:18:13 INFO scheduler.DAGScheduler: ResultStage 1209 (foreachPartition at PredictorEngineApp.java:153) finished in 12.955 s 18/04/17 17:18:13 INFO scheduler.DAGScheduler: Job 1209 finished: foreachPartition at PredictorEngineApp.java:153, took 13.048392 s 18/04/17 17:18:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5bb40050 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5bb400500x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42084, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c979d, negotiated timeout = 60000 18/04/17 17:18:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c979d 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c979d closed 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1195.0 (TID 1195) in 13041 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:18:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1195.0, whose tasks have all completed, from pool 18/04/17 17:18:13 INFO scheduler.DAGScheduler: ResultStage 1195 (foreachPartition at PredictorEngineApp.java:153) finished in 13.042 s 18/04/17 17:18:13 INFO scheduler.DAGScheduler: Job 1195 finished: foreachPartition at PredictorEngineApp.java:153, took 13.074748 s 18/04/17 17:18:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x35acac0f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x35acac0f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.9 from job set of time 1523974680000 ms 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42087, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c979e, negotiated timeout = 60000 18/04/17 17:18:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c979e 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c979e closed 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.28 from job set of time 1523974680000 ms 18/04/17 17:18:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1210.0 (TID 1210) in 13240 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:18:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1210.0, whose tasks have all completed, from pool 18/04/17 17:18:13 INFO scheduler.DAGScheduler: ResultStage 1210 (foreachPartition at PredictorEngineApp.java:153) finished in 13.241 s 18/04/17 17:18:13 INFO scheduler.DAGScheduler: Job 1210 finished: foreachPartition at PredictorEngineApp.java:153, took 13.335767 s 18/04/17 17:18:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x237c1b19 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x237c1b190x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35708, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a974a, negotiated timeout = 60000 18/04/17 17:18:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a974a 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a974a closed 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.33 from job set of time 1523974680000 ms 18/04/17 17:18:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1190.0 (TID 1190) in 13567 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:18:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1190.0, whose tasks have all completed, from pool 18/04/17 17:18:13 INFO scheduler.DAGScheduler: ResultStage 1190 (foreachPartition at PredictorEngineApp.java:153) finished in 13.567 s 18/04/17 17:18:13 INFO scheduler.DAGScheduler: Job 1190 finished: foreachPartition at PredictorEngineApp.java:153, took 13.586571 s 18/04/17 17:18:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6f0e579b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6f0e579b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46688, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29081, negotiated timeout = 60000 18/04/17 17:18:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29081 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29081 closed 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.2 from job set of time 1523974680000 ms 18/04/17 17:18:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1188.0 (TID 1188) in 13683 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:18:13 INFO scheduler.DAGScheduler: ResultStage 1188 (foreachPartition at PredictorEngineApp.java:153) finished in 13.683 s 18/04/17 17:18:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1188.0, whose tasks have all completed, from pool 18/04/17 17:18:13 INFO scheduler.DAGScheduler: Job 1188 finished: foreachPartition at PredictorEngineApp.java:153, took 13.696170 s 18/04/17 17:18:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7810264e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7810264e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35714, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a974c, negotiated timeout = 60000 18/04/17 17:18:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a974c 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a974c closed 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.32 from job set of time 1523974680000 ms 18/04/17 17:18:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1186.0 (TID 1186) in 13883 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:18:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1186.0, whose tasks have all completed, from pool 18/04/17 17:18:13 INFO scheduler.DAGScheduler: ResultStage 1186 (foreachPartition at PredictorEngineApp.java:153) finished in 13.883 s 18/04/17 17:18:13 INFO scheduler.DAGScheduler: Job 1186 finished: foreachPartition at PredictorEngineApp.java:153, took 13.891289 s 18/04/17 17:18:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x65a41a5d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x65a41a5d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35717, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a974d, negotiated timeout = 60000 18/04/17 17:18:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a974d 18/04/17 17:18:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a974d closed 18/04/17 17:18:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.20 from job set of time 1523974680000 ms 18/04/17 17:18:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1203.0 (TID 1203) in 13920 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:18:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1203.0, whose tasks have all completed, from pool 18/04/17 17:18:14 INFO scheduler.DAGScheduler: ResultStage 1203 (foreachPartition at PredictorEngineApp.java:153) finished in 13.921 s 18/04/17 17:18:14 INFO scheduler.DAGScheduler: Job 1203 finished: foreachPartition at PredictorEngineApp.java:153, took 13.992977 s 18/04/17 17:18:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3265aa1f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3265aa1f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46697, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29085, negotiated timeout = 60000 18/04/17 17:18:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29085 18/04/17 17:18:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29085 closed 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.29 from job set of time 1523974680000 ms 18/04/17 17:18:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1194.0 (TID 1194) in 14070 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:18:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1194.0, whose tasks have all completed, from pool 18/04/17 17:18:14 INFO scheduler.DAGScheduler: ResultStage 1194 (foreachPartition at PredictorEngineApp.java:153) finished in 14.070 s 18/04/17 17:18:14 INFO scheduler.DAGScheduler: Job 1194 finished: foreachPartition at PredictorEngineApp.java:153, took 14.101425 s 18/04/17 17:18:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x197f9f1e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x197f9f1e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35724, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a974e, negotiated timeout = 60000 18/04/17 17:18:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a974e 18/04/17 17:18:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a974e closed 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.34 from job set of time 1523974680000 ms 18/04/17 17:18:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1199.0 (TID 1199) in 14086 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:18:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1199.0, whose tasks have all completed, from pool 18/04/17 17:18:14 INFO scheduler.DAGScheduler: ResultStage 1199 (foreachPartition at PredictorEngineApp.java:153) finished in 14.087 s 18/04/17 17:18:14 INFO scheduler.DAGScheduler: Job 1199 finished: foreachPartition at PredictorEngineApp.java:153, took 14.140477 s 18/04/17 17:18:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x687da86d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x687da86d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46704, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29087, negotiated timeout = 60000 18/04/17 17:18:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1197.0 (TID 1197) in 14104 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:18:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1197.0, whose tasks have all completed, from pool 18/04/17 17:18:14 INFO scheduler.DAGScheduler: ResultStage 1197 (foreachPartition at PredictorEngineApp.java:153) finished in 14.114 s 18/04/17 17:18:14 INFO scheduler.DAGScheduler: Job 1197 finished: foreachPartition at PredictorEngineApp.java:153, took 14.152677 s 18/04/17 17:18:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29087 18/04/17 17:18:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29087 closed 18/04/17 17:18:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.27 from job set of time 1523974680000 ms 18/04/17 17:18:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.24 from job set of time 1523974680000 ms 18/04/17 17:18:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1189.0 (TID 1189) in 19559 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:18:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 1189.0, whose tasks have all completed, from pool 18/04/17 17:18:19 INFO scheduler.DAGScheduler: ResultStage 1189 (foreachPartition at PredictorEngineApp.java:153) finished in 19.559 s 18/04/17 17:18:19 INFO scheduler.DAGScheduler: Job 1189 finished: foreachPartition at PredictorEngineApp.java:153, took 19.575502 s 18/04/17 17:18:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4803a1ab connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4803a1ab0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46715, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2908b, negotiated timeout = 60000 18/04/17 17:18:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2908b 18/04/17 17:18:19 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2908b closed 18/04/17 17:18:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:19 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.5 from job set of time 1523974680000 ms 18/04/17 17:18:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1208.0 (TID 1208) in 19913 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:18:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 1208.0, whose tasks have all completed, from pool 18/04/17 17:18:20 INFO scheduler.DAGScheduler: ResultStage 1208 (foreachPartition at PredictorEngineApp.java:153) finished in 19.914 s 18/04/17 17:18:20 INFO scheduler.DAGScheduler: Job 1207 finished: foreachPartition at PredictorEngineApp.java:153, took 20.004289 s 18/04/17 17:18:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x51ce7d8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x51ce7d80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46718, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2908c, negotiated timeout = 60000 18/04/17 17:18:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2908c 18/04/17 17:18:20 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2908c closed 18/04/17 17:18:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:20 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.11 from job set of time 1523974680000 ms 18/04/17 17:18:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1192.0 (TID 1192) in 25639 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:18:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 1192.0, whose tasks have all completed, from pool 18/04/17 17:18:25 INFO scheduler.DAGScheduler: ResultStage 1192 (foreachPartition at PredictorEngineApp.java:153) finished in 25.639 s 18/04/17 17:18:25 INFO scheduler.DAGScheduler: Job 1193 finished: foreachPartition at PredictorEngineApp.java:153, took 25.664474 s 18/04/17 17:18:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x27b0906f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x27b0906f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42135, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97a1, negotiated timeout = 60000 18/04/17 17:18:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97a1 18/04/17 17:18:25 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97a1 closed 18/04/17 17:18:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:25 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.1 from job set of time 1523974680000 ms 18/04/17 17:18:26 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1207.0 (TID 1207) in 25887 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:18:26 INFO cluster.YarnClusterScheduler: Removed TaskSet 1207.0, whose tasks have all completed, from pool 18/04/17 17:18:26 INFO scheduler.DAGScheduler: ResultStage 1207 (foreachPartition at PredictorEngineApp.java:153) finished in 25.887 s 18/04/17 17:18:26 INFO scheduler.DAGScheduler: Job 1208 finished: foreachPartition at PredictorEngineApp.java:153, took 25.974781 s 18/04/17 17:18:26 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xffa0faa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:18:26 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xffa0faa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:18:26 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:18:26 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35756, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:18:26 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9752, negotiated timeout = 60000 18/04/17 17:18:26 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9752 18/04/17 17:18:26 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9752 closed 18/04/17 17:18:26 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:18:26 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.26 from job set of time 1523974680000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Added jobs for time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.0 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.2 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.3 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.0 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.4 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.5 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.3 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.6 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.7 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.4 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.8 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.9 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.10 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.1 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.11 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.12 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.13 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.13 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.14 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.14 from job set of time 1523974740000 ms 18/04/17 17:19:00 
INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.15 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.16 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.17 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.16 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.19 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.18 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.17 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.20 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.21 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.21 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.22 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.23 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.24 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.25 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.26 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.27 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.28 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.29 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.30 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.31 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.32 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.30 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.33 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.34 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974740000 ms.35 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1211 
(foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1211 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1211 (KafkaRDD[1679] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1211 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1211_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1211_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1211 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1211 (KafkaRDD[1679] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1211.0 with 1 tasks 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1211.0 (TID 1211, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at 
PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1212 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1212 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1212 (KafkaRDD[1664] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1212 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1212_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1212_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1212 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1212 (KafkaRDD[1664] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1212.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1213 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1213 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1213 (KafkaRDD[1690] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1212.0 (TID 1212, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1213 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1205_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1213_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1213_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1213 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1213 (KafkaRDD[1690] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding 
task set 1213.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1214 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1214 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1214 (KafkaRDD[1662] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1214 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1213.0 (TID 1213, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1205_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1214_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1214_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1214 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1214 (KafkaRDD[1662] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1214.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1215 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1215 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1215 (KafkaRDD[1684] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1214.0 (TID 1214, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1215 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1215_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1215_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1215 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1215 (KafkaRDD[1684] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1215.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1216 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1216 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of 
final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1216 (KafkaRDD[1691] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1215.0 (TID 1215, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1216 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1186_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1216_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1216_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1216 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1216 (KafkaRDD[1691] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1216.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1217 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1217 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1217 (KafkaRDD[1681] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1186_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1216.0 (TID 1216, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1217 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1189 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1188_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1188_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1217_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1212_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1217_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1217 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1217 (KafkaRDD[1681] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1214_piece0 in memory on ***hostname 
masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1217.0 with 1 tasks 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1187 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1190 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1191 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1218 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1218 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1218 (KafkaRDD[1657] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1217.0 (TID 1217, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1218 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1189_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1189_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1191_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1218_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1218_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1218 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1218 (KafkaRDD[1657] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1218.0 with 1 tasks 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1191_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1219 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1219 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1219 (KafkaRDD[1668] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1219 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1218.0 (TID 1218, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1192 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1190_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 
MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1213_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1215_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1190_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1219_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1219_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1219 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1219 (KafkaRDD[1668] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1219.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1220 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1220 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1220 (KafkaRDD[1666] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1194 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1220 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1219.0 (TID 1219, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1192_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1192_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1193 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1220_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1220_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1194_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1220 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1220 (KafkaRDD[1666] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1218_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1220.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1221 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1221 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1221 (KafkaRDD[1671] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1221 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1220.0 (TID 1220, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1217_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1194_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1195 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1193_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1221_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1221_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1193_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1221 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1221 (KafkaRDD[1671] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1221.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1222 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1222 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1222 (KafkaRDD[1678] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1222 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1197 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1221.0 (TID 1221, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1195_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1222_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1222_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1222 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 
1222 (KafkaRDD[1678] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1222.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1223 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1223 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1195_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1223 (KafkaRDD[1661] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1223 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1222.0 (TID 1222, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1216_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1219_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1196 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1197_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1197_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1223_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1223_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1223 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1223 (KafkaRDD[1661] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1223.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1224 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1224 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1198 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1224 (KafkaRDD[1675] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1224 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1223.0 (TID 1223, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1196_piece0 on ***IP masked***:45737 
in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1196_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1188 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1224_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1224_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1187_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1224 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1224 (KafkaRDD[1675] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1224.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1225 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1225 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1221_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1225 (KafkaRDD[1674] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1225 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1224.0 (TID 1224, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1187_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1220_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1200 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1198_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1225_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1225_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1225 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1225 (KafkaRDD[1674] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1225.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1226 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1226 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of 
final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1226 (KafkaRDD[1676] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1226 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1225.0 (TID 1225, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1198_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1222_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1199 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1200_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1226_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1226_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1226 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1226 (KafkaRDD[1676] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1226.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1227 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1227 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1227 (KafkaRDD[1683] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1200_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1227 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1226.0 (TID 1226, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1201 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1224_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1199_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1227_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1227_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1227 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: 
Submitting 1 missing tasks from ResultStage 1227 (KafkaRDD[1683] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1227.0 with 1 tasks 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1199_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1228 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1228 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1228 (KafkaRDD[1688] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1228 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1227.0 (TID 1227, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1203 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1225_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1201_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1223_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1201_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1228_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1228_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1228 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1228 (KafkaRDD[1688] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1228.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1229 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1229 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1229 (KafkaRDD[1687] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1202 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1229 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1228.0 (TID 1228, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed 
broadcast_1203_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1203_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1226_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1229_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1229_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1227_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1229 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1229 (KafkaRDD[1687] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1229.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1230 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1230 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1230 (KafkaRDD[1689] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1230 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1229.0 (TID 1229, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1230_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1230_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1230 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1230 (KafkaRDD[1689] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1230.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1231 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1231 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1231 (KafkaRDD[1685] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1231 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1204 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 
in stage 1230.0 (TID 1230, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1202_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1231_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1231_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1231 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1231 (KafkaRDD[1685] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1231.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1232 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1232 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1232 (KafkaRDD[1680] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1232 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1231.0 (TID 1231, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1202_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1211_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1229_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1232_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1232_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1232 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1232 (KafkaRDD[1680] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1232.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1233 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1233 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1233 (KafkaRDD[1667] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1233 stored as values in memory (estimated size 5.7 
KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1232.0 (TID 1232, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1233_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1233_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1233 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1233 (KafkaRDD[1667] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1233.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1234 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1234 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1234 (KafkaRDD[1658] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1234 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1233.0 (TID 1233, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1234_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1234_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1234 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1234 (KafkaRDD[1658] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1234.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1235 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1235 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1235 (KafkaRDD[1665] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1235 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1234.0 (TID 1234, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1228_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1235_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO 
storage.BlockManagerInfo: Added broadcast_1235_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1235 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1235 (KafkaRDD[1665] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1235.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1236 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1236 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1236 (KafkaRDD[1663] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1236 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1235.0 (TID 1235, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1236_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1236_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1236 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1236 (KafkaRDD[1663] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1236.0 with 1 tasks 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Got job 1237 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1237 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1237 (KafkaRDD[1682] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1237 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1236.0 (TID 1236, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1233_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.MemoryStore: Block broadcast_1237_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1237_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO spark.SparkContext: Created broadcast 1237 from broadcast at DAGScheduler.scala:1006 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1237 (KafkaRDD[1682] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Adding task set 1237.0 with 1 tasks 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1206 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1204_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1237.0 (TID 1237, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1231_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1204_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1236_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1235_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1232_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1230_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1214.0 (TID 1214) in 77 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1214.0, whose tasks have all completed, from pool 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1219.0 (TID 1219) in 62 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: ResultStage 1214 (foreachPartition at PredictorEngineApp.java:153) finished in 0.077 s 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1219.0, whose tasks have all completed, from pool 18/04/17 17:19:00 INFO scheduler.DAGScheduler: ResultStage 1219 (foreachPartition at PredictorEngineApp.java:153) finished in 0.062 s 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Job 1214 finished: foreachPartition at PredictorEngineApp.java:153, took 0.107172 s 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Job 1219 finished: foreachPartition at PredictorEngineApp.java:153, took 0.103410 s 18/04/17 17:19:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7776f052 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1755aff7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7776f0520x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1755aff70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 
18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1237_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46904, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42310, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Added broadcast_1234_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1205 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1207 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1207_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29098, negotiated timeout = 60000 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1207_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1208 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1206_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1206_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1210 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97ad, negotiated timeout = 60000 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1208_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29098 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1208_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1209 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1210_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1223.0 (TID 1223) in 65 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1223.0, whose tasks have all completed, from pool 18/04/17 17:19:00 INFO scheduler.DAGScheduler: ResultStage 1223 (foreachPartition at PredictorEngineApp.java:153) finished in 0.065 s 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1227.0 (TID 1227) in 53 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Job 1223 finished: foreachPartition at 
PredictorEngineApp.java:153, took 0.117723 s 18/04/17 17:19:00 INFO scheduler.DAGScheduler: ResultStage 1227 (foreachPartition at PredictorEngineApp.java:153) finished in 0.054 s 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1227.0, whose tasks have all completed, from pool 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Job 1227 finished: foreachPartition at PredictorEngineApp.java:153, took 0.116581 s 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1210_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO spark.ContextCleaner: Cleaned accumulator 1211 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1226.0 (TID 1226) in 59 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:19:00 INFO scheduler.DAGScheduler: ResultStage 1226 (foreachPartition at PredictorEngineApp.java:153) finished in 0.059 s 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1226.0, whose tasks have all completed, from pool 18/04/17 17:19:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97ad 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1209_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:19:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29098 closed 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.27 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Job 1226 finished: foreachPartition at PredictorEngineApp.java:153, took 0.119620 s 18/04/17 17:19:00 INFO storage.BlockManagerInfo: Removed broadcast_1209_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:19:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x76bf5e3e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x76bf5e3e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46910, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97ad closed 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.6 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29099, negotiated timeout = 60000 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.12 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.5 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29099 18/04/17 17:19:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29099 closed 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.20 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1216.0 (TID 1216) in 355 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1216.0, whose tasks have all completed, from pool 18/04/17 17:19:00 INFO scheduler.DAGScheduler: ResultStage 1216 (foreachPartition at PredictorEngineApp.java:153) finished in 0.355 s 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Job 1216 finished: foreachPartition at PredictorEngineApp.java:153, took 0.390088 s 18/04/17 17:19:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x453b2003 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x453b20030x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46918, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2909d, negotiated timeout = 60000 18/04/17 17:19:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2909d 18/04/17 17:19:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2909d closed 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.35 from job set of time 1523974740000 ms 18/04/17 17:19:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1217.0 (TID 1217) in 729 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:19:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1217.0, whose tasks have all completed, from pool 18/04/17 17:19:00 INFO scheduler.DAGScheduler: ResultStage 1217 (foreachPartition at PredictorEngineApp.java:153) finished in 0.729 s 18/04/17 17:19:00 INFO scheduler.DAGScheduler: Job 1217 finished: foreachPartition at PredictorEngineApp.java:153, took 0.766348 s 18/04/17 17:19:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2a147dbb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2a147dbb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35947, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9768, negotiated timeout = 60000 18/04/17 17:19:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9768 18/04/17 17:19:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9768 closed 18/04/17 17:19:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.25 from job set of time 1523974740000 ms 18/04/17 17:19:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1236.0 (TID 1236) in 3491 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:19:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1236.0, whose tasks have all completed, from pool 18/04/17 17:19:03 INFO scheduler.DAGScheduler: ResultStage 1236 (foreachPartition at PredictorEngineApp.java:153) finished in 3.492 s 18/04/17 17:19:03 INFO scheduler.DAGScheduler: Job 1236 finished: foreachPartition at PredictorEngineApp.java:153, took 3.575779 s 18/04/17 17:19:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3cc7d8dc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3cc7d8dc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35959, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9769, negotiated timeout = 60000 18/04/17 17:19:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9769 18/04/17 17:19:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9769 closed 18/04/17 17:19:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.7 from job set of time 1523974740000 ms 18/04/17 17:19:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1212.0 (TID 1212) in 4700 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:19:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1212.0, whose tasks have all completed, from pool 18/04/17 17:19:04 INFO scheduler.DAGScheduler: ResultStage 1212 (foreachPartition at PredictorEngineApp.java:153) finished in 4.713 s 18/04/17 17:19:04 INFO scheduler.DAGScheduler: Job 1212 finished: foreachPartition at PredictorEngineApp.java:153, took 4.726466 s 18/04/17 17:19:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x509a2341 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x509a23410x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35967, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a976a, negotiated timeout = 60000 18/04/17 17:19:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a976a 18/04/17 17:19:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a976a closed 18/04/17 17:19:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.8 from job set of time 1523974740000 ms 18/04/17 17:19:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1213.0 (TID 1213) in 5982 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:19:06 INFO scheduler.DAGScheduler: ResultStage 1213 (foreachPartition at PredictorEngineApp.java:153) finished in 5.982 s 18/04/17 17:19:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1213.0, whose tasks have all completed, from pool 18/04/17 17:19:06 INFO scheduler.DAGScheduler: Job 1213 finished: foreachPartition at PredictorEngineApp.java:153, took 6.009539 s 18/04/17 17:19:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3f4697f2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3f4697f20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35973, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a976b, negotiated timeout = 60000 18/04/17 17:19:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a976b 18/04/17 17:19:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a976b closed 18/04/17 17:19:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.34 from job set of time 1523974740000 ms 18/04/17 17:19:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1229.0 (TID 1229) in 7716 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:19:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1229.0, whose tasks have all completed, from pool 18/04/17 17:19:07 INFO scheduler.DAGScheduler: ResultStage 1229 (foreachPartition at PredictorEngineApp.java:153) finished in 7.717 s 18/04/17 17:19:07 INFO scheduler.DAGScheduler: Job 1229 finished: foreachPartition at PredictorEngineApp.java:153, took 7.785102 s 18/04/17 17:19:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5833bab7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5833bab70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42363, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97bb, negotiated timeout = 60000 18/04/17 17:19:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97bb 18/04/17 17:19:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97bb closed 18/04/17 17:19:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.31 from job set of time 1523974740000 ms 18/04/17 17:19:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1235.0 (TID 1235) in 9531 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:19:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1235.0, whose tasks have all completed, from pool 18/04/17 17:19:09 INFO scheduler.DAGScheduler: ResultStage 1235 (foreachPartition at PredictorEngineApp.java:153) finished in 9.531 s 18/04/17 17:19:09 INFO scheduler.DAGScheduler: Job 1235 finished: foreachPartition at PredictorEngineApp.java:153, took 9.613166 s 18/04/17 17:19:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x76344b55 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x76344b550x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35992, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a976c, negotiated timeout = 60000 18/04/17 17:19:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a976c 18/04/17 17:19:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a976c closed 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.9 from job set of time 1523974740000 ms 18/04/17 17:19:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1225.0 (TID 1225) in 9642 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:19:09 INFO scheduler.DAGScheduler: ResultStage 1225 (foreachPartition at PredictorEngineApp.java:153) finished in 9.643 s 18/04/17 17:19:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1225.0, whose tasks have all completed, from pool 18/04/17 17:19:09 INFO scheduler.DAGScheduler: Job 1225 finished: foreachPartition at PredictorEngineApp.java:153, took 9.699798 s 18/04/17 17:19:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1dc06130 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1dc061300x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35995, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a976d, negotiated timeout = 60000 18/04/17 17:19:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a976d 18/04/17 17:19:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a976d closed 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.18 from job set of time 1523974740000 ms 18/04/17 17:19:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1234.0 (TID 1234) in 9685 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:19:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1234.0, whose tasks have all completed, from pool 18/04/17 17:19:09 INFO scheduler.DAGScheduler: ResultStage 1234 (foreachPartition at PredictorEngineApp.java:153) finished in 9.685 s 18/04/17 17:19:09 INFO scheduler.DAGScheduler: Job 1234 finished: foreachPartition at PredictorEngineApp.java:153, took 9.764852 s 18/04/17 17:19:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x751f5f8a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x751f5f8a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:35998, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a976e, negotiated timeout = 60000 18/04/17 17:19:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a976e 18/04/17 17:19:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a976e closed 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1228.0 (TID 1228) in 9723 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:19:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1228.0, whose tasks have all completed, from pool 18/04/17 17:19:09 INFO scheduler.DAGScheduler: ResultStage 1228 (foreachPartition at PredictorEngineApp.java:153) finished in 9.723 s 18/04/17 17:19:09 INFO scheduler.DAGScheduler: Job 1228 finished: foreachPartition at PredictorEngineApp.java:153, took 9.789200 s 18/04/17 17:19:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x791dc979 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x791dc9790x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.2 from job set of time 1523974740000 ms 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36001, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a976f, negotiated timeout = 60000 18/04/17 17:19:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a976f 18/04/17 17:19:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a976f closed 18/04/17 17:19:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.32 from job set of time 1523974740000 ms 18/04/17 17:19:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1211.0 (TID 1211) in 10037 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:19:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1211.0, whose tasks have all completed, from pool 18/04/17 17:19:10 INFO scheduler.DAGScheduler: ResultStage 1211 (foreachPartition at PredictorEngineApp.java:153) finished in 10.036 s 18/04/17 17:19:10 INFO scheduler.DAGScheduler: Job 1211 finished: foreachPartition at PredictorEngineApp.java:153, took 10.044801 s 18/04/17 17:19:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x46def47e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x46def47e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:46984, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290a7, negotiated timeout = 60000 18/04/17 17:19:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290a7 18/04/17 17:19:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290a7 closed 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.23 from job set of time 1523974740000 ms 18/04/17 17:19:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1230.0 (TID 1230) in 10148 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:19:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1230.0, whose tasks have all completed, from pool 18/04/17 17:19:10 INFO scheduler.DAGScheduler: ResultStage 1230 (foreachPartition at PredictorEngineApp.java:153) finished in 10.149 s 18/04/17 17:19:10 INFO scheduler.DAGScheduler: Job 1230 finished: foreachPartition at PredictorEngineApp.java:153, took 10.219444 s 18/04/17 17:19:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xfee0f2c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xfee0f2c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36010, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9771, negotiated timeout = 60000 18/04/17 17:19:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9771 18/04/17 17:19:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9771 closed 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.33 from job set of time 1523974740000 ms 18/04/17 17:19:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1221.0 (TID 1221) in 10298 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:19:10 INFO scheduler.DAGScheduler: ResultStage 1221 (foreachPartition at PredictorEngineApp.java:153) finished in 10.298 s 18/04/17 17:19:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1221.0, whose tasks have all completed, from pool 18/04/17 17:19:10 INFO scheduler.DAGScheduler: Job 1221 finished: foreachPartition at PredictorEngineApp.java:153, took 10.344267 s 18/04/17 17:19:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x55a92e56 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x55a92e560x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36013, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9772, negotiated timeout = 60000 18/04/17 17:19:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9772 18/04/17 17:19:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9772 closed 18/04/17 17:19:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.15 from job set of time 1523974740000 ms 18/04/17 17:19:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1218.0 (TID 1218) in 11543 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:19:11 INFO scheduler.DAGScheduler: ResultStage 1218 (foreachPartition at PredictorEngineApp.java:153) finished in 11.543 s 18/04/17 17:19:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1218.0, whose tasks have all completed, from pool 18/04/17 17:19:11 INFO scheduler.DAGScheduler: Job 1218 finished: foreachPartition at PredictorEngineApp.java:153, took 11.582567 s 18/04/17 17:19:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x58a6be91 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x58a6be910x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36018, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9774, negotiated timeout = 60000 18/04/17 17:19:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9774 18/04/17 17:19:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9774 closed 18/04/17 17:19:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.1 from job set of time 1523974740000 ms 18/04/17 17:19:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1222.0 (TID 1222) in 14133 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:19:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1222.0, whose tasks have all completed, from pool 18/04/17 17:19:14 INFO scheduler.DAGScheduler: ResultStage 1222 (foreachPartition at PredictorEngineApp.java:153) finished in 14.133 s 18/04/17 17:19:14 INFO scheduler.DAGScheduler: Job 1222 finished: foreachPartition at PredictorEngineApp.java:153, took 14.181777 s 18/04/17 17:19:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x42aae313 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x42aae3130x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47005, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290aa, negotiated timeout = 60000 18/04/17 17:19:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290aa 18/04/17 17:19:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290aa closed 18/04/17 17:19:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.22 from job set of time 1523974740000 ms 18/04/17 17:19:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1232.0 (TID 1232) in 14763 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:19:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1232.0, whose tasks have all completed, from pool 18/04/17 17:19:14 INFO scheduler.DAGScheduler: ResultStage 1232 (foreachPartition at PredictorEngineApp.java:153) finished in 14.763 s 18/04/17 17:19:14 INFO scheduler.DAGScheduler: Job 1232 finished: foreachPartition at PredictorEngineApp.java:153, took 14.839938 s 18/04/17 17:19:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1e1d0de7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1e1d0de70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42413, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97bf, negotiated timeout = 60000 18/04/17 17:19:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97bf 18/04/17 17:19:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97bf closed 18/04/17 17:19:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.24 from job set of time 1523974740000 ms 18/04/17 17:19:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1224.0 (TID 1224) in 15300 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:19:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1224.0, whose tasks have all completed, from pool 18/04/17 17:19:15 INFO scheduler.DAGScheduler: ResultStage 1224 (foreachPartition at PredictorEngineApp.java:153) finished in 15.301 s 18/04/17 17:19:15 INFO scheduler.DAGScheduler: Job 1224 finished: foreachPartition at PredictorEngineApp.java:153, took 15.355103 s 18/04/17 17:19:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1231.0 (TID 1231) in 15280 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:19:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1231.0, whose tasks have all completed, from pool 18/04/17 17:19:15 INFO scheduler.DAGScheduler: ResultStage 1231 (foreachPartition at PredictorEngineApp.java:153) finished in 15.281 s 18/04/17 17:19:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3d4db4c9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:15 INFO scheduler.DAGScheduler: Job 1231 finished: foreachPartition at PredictorEngineApp.java:153, took 15.354567 s 18/04/17 17:19:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3d4db4c90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7a2d7b25 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7a2d7b250x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36035, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47013, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9777, negotiated timeout = 60000 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290ac, negotiated timeout = 60000 18/04/17 17:19:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290ac 18/04/17 17:19:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9777 18/04/17 17:19:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290ac closed 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9777 closed 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.29 from job set of time 1523974740000 ms 18/04/17 17:19:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.19 from job set of time 1523974740000 ms 18/04/17 17:19:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1215.0 (TID 1215) in 15433 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:19:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1215.0, whose tasks have all completed, from pool 18/04/17 17:19:15 INFO scheduler.DAGScheduler: ResultStage 1215 (foreachPartition at PredictorEngineApp.java:153) finished in 15.433 s 18/04/17 17:19:15 INFO scheduler.DAGScheduler: Job 1215 finished: foreachPartition at PredictorEngineApp.java:153, took 15.464970 s 18/04/17 17:19:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2fa1fde8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2fa1fde80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36041, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9778, negotiated timeout = 60000 18/04/17 17:19:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9778 18/04/17 17:19:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9778 closed 18/04/17 17:19:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.28 from job set of time 1523974740000 ms 18/04/17 17:19:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1220.0 (TID 1220) in 18786 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:19:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1220.0, whose tasks have all completed, from pool 18/04/17 17:19:18 INFO scheduler.DAGScheduler: ResultStage 1220 (foreachPartition at PredictorEngineApp.java:153) finished in 18.787 s 18/04/17 17:19:18 INFO scheduler.DAGScheduler: Job 1220 finished: foreachPartition at PredictorEngineApp.java:153, took 18.830703 s 18/04/17 17:19:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x285f16de connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x285f16de0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47025, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290ad, negotiated timeout = 60000 18/04/17 17:19:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290ad 18/04/17 17:19:18 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290ad closed 18/04/17 17:19:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:18 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.10 from job set of time 1523974740000 ms 18/04/17 17:19:29 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1237.0 (TID 1237) in 29264 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:19:29 INFO cluster.YarnClusterScheduler: Removed TaskSet 1237.0, whose tasks have all completed, from pool 18/04/17 17:19:29 INFO scheduler.DAGScheduler: ResultStage 1237 (foreachPartition at PredictorEngineApp.java:153) finished in 29.265 s 18/04/17 17:19:29 INFO scheduler.DAGScheduler: Job 1237 finished: foreachPartition at PredictorEngineApp.java:153, took 29.354446 s 18/04/17 17:19:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7f5dbe26 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7f5dbe260x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36070, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a977d, negotiated timeout = 60000 18/04/17 17:19:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a977d 18/04/17 17:19:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a977d closed 18/04/17 17:19:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:29 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.26 from job set of time 1523974740000 ms 18/04/17 17:19:29 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1233.0 (TID 1233) in 29650 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:19:29 INFO cluster.YarnClusterScheduler: Removed TaskSet 1233.0, whose tasks have all completed, from pool 18/04/17 17:19:29 INFO scheduler.DAGScheduler: ResultStage 1233 (foreachPartition at PredictorEngineApp.java:153) finished in 29.651 s 18/04/17 17:19:29 INFO scheduler.DAGScheduler: Job 1233 finished: foreachPartition at PredictorEngineApp.java:153, took 29.728737 s 18/04/17 17:19:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x67e9fdea connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:19:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x67e9fdea0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:19:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
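The two call sites repeated throughout this section, createDirectStream at PredictorEngineApp.java:125 and foreachPartition at PredictorEngineApp.java:153, together with a fresh hconnection-0x... ZooKeeper session being opened and closed as each streaming job finishes, are consistent with a driver shaped roughly like the sketch below. This is a minimal, hypothetical reconstruction and not the actual PredictorEngineApp source: the class name, broker list, topic, and the omitted HBase write are invented; only the Kafka direct-stream plus per-partition HBase connection pattern is inferred from the log.

// Hypothetical sketch of a Spark 1.6-era streaming driver; names are assumptions.
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public class DirectStreamToHBaseSketch {
  public static void main(String[] args) throws Exception {
    SparkConf sparkConf = new SparkConf().setAppName("direct-stream-to-hbase-sketch");
    // One-minute batches, matching the 60 000 ms spacing of the batch times in the log.
    JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.minutes(1));

    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical brokers
    Set<String> topics = new HashSet<>(Collections.singletonList("events")); // hypothetical topic

    // Receiver-less direct stream, as the "createDirectStream" call site in the log suggests.
    JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, topics);

    stream.foreachRDD((VoidFunction<JavaPairRDD<String, String>>) rdd ->
        rdd.foreachPartition(records -> {
          // A connection created here lives only for this partition of this batch,
          // which would produce the per-job "connecting to ZooKeeper ensemble" /
          // "Closing zookeeper sessionid" pairs seen in the surrounding log.
          Configuration hbaseConf = HBaseConfiguration.create();
          try (Connection connection = ConnectionFactory.createConnection(hbaseConf)) {
            while (records.hasNext()) {
              Tuple2<String, String> message = records.next();
              // score the message and write the prediction to an HBase table (omitted)
            }
          }
        }));

    jssc.start();
    jssc.awaitTermination();
  }
}

If the application is indeed structured this way, every partition of every micro-batch pays the ZooKeeper session setup and teardown visible above; the usual alternative is to reuse one HBase connection per executor (for example, a lazily initialized singleton), which would eliminate most of this connection churn.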
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:19:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47050, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:19:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290b1, negotiated timeout = 60000 18/04/17 17:19:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290b1 18/04/17 17:19:29 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290b1 closed 18/04/17 17:19:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:19:29 INFO scheduler.JobScheduler: Finished job streaming job 1523974740000 ms.11 from job set of time 1523974740000 ms 18/04/17 17:19:29 INFO scheduler.JobScheduler: Total delay: 29.824 s for time 1523974740000 ms (execution: 29.771 s) 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1584 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1584 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1620 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1620 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1584 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1584 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1620 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1620 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1585 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1585 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1621 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1621 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1585 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1585 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1621 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1621 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1586 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1586 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1622 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1622 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1586 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1586 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1622 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1622 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1587 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1587 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1623 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1623 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1587 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1587 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1623 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1623 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1588 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1588 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1624 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1624 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1588 
from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1588 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1624 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1624 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1589 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1589 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1625 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1625 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1589 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1589 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1625 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1625 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1590 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1590 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1626 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1626 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1590 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1590 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1626 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1626 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1591 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1591 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1627 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1627 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1591 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1591 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1627 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1627 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1592 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1592 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1628 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1628 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1592 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1592 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1628 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1628 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1593 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1593 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1629 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1629 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1593 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1593 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1629 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1629 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1594 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1594 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1630 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1630 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1594 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1594 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1630 from 
persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1630 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1595 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1595 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1631 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1631 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1595 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1595 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1631 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1631 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1596 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1596 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1632 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1632 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1596 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1596 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1632 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1632 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1597 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1597 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1633 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1633 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1597 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1597 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1633 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1633 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1598 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1598 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1634 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1634 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1598 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1598 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1634 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1634 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1599 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1599 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1635 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1635 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1599 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1599 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1635 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1635 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1600 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1600 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1636 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1636 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1600 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1600 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1636 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1636 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1601 from 
persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1601 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1637 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1637 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1601 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1601 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1637 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1637 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1602 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1602 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1638 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1638 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1602 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1602 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1638 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1638 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1603 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1603 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1639 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1639 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1603 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1603 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1639 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1639 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1604 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1604 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1640 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1640 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1604 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1604 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1640 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1640 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1605 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1605 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1641 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1641 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1605 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1605 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1641 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1641 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1606 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1606 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1642 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1642 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1606 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1606 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1642 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1642 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1607 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1607 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1643 from 
persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1643 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1607 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1607 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1643 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1643 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1608 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1608 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1644 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1644 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1608 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1608 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1644 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1644 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1609 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1609 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1645 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1645 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1609 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1609 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1645 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1645 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1610 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1610 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1646 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1646 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1610 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1610 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1646 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1646 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1611 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1611 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1647 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1647 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1611 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1611 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1647 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1647 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1612 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1612 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1648 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1648 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1612 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1612 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1648 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1648 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1613 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1613 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1649 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1649 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1613 from 
persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1613 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1649 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1649 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1614 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1614 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1650 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1650 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1614 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1614 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1650 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1650 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1615 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1615 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1651 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1651 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1615 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1615 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1651 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1651 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1616 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1616 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1652 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1652 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1616 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1616 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1652 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1652 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1617 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1617 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1653 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1653 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1617 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1617 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1653 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1653 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1618 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1618 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1654 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1654 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1618 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1618 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1654 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1654 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1619 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1619 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1655 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1655 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1619 from persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1619 18/04/17 17:19:29 INFO kafka.KafkaRDD: Removing RDD 1655 from 
persistence list 18/04/17 17:19:29 INFO storage.BlockManager: Removing RDD 1655 18/04/17 17:19:29 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:19:29 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974560000 ms 1523974620000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Added jobs for time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.0 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.1 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.2 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.0 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.5 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.3 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.4 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.4 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.3 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.7 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.6 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.8 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.9 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.11 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.10 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.12 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.13 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.13 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.14 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.16 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.15 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.14 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.16 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.18 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.17 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.19 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.20 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.17 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.22 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.21 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.21 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.23 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.24 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.26 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.25 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.27 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.28 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.29 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.30 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.31 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.30 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.32 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.34 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.33 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974800000 ms.35 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1238 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1238 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 
INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1238 (KafkaRDD[1719] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1238 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1238_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1238_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1238 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1238 (KafkaRDD[1719] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1238.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1240 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 1239 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1239 (KafkaRDD[1694] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1239 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1238.0 (TID 1238, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1239_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1239_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1239 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1239 (KafkaRDD[1694] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1239.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1239 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1240 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1240 (KafkaRDD[1703] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1239.0 (TID 1239, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1240 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1240_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1240_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1240 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1240 (KafkaRDD[1703] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1240.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1241 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1241 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1241 (KafkaRDD[1698] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in 
stage 1240.0 (TID 1240, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1241 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1241_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1241_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1241 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1241 (KafkaRDD[1698] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1241.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1242 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1242 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1242 (KafkaRDD[1710] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1241.0 (TID 1241, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1242 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1242_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1242_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1242 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1242 (KafkaRDD[1710] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1242.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1243 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1243 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1238_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1243 (KafkaRDD[1711] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1242.0 (TID 1242, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1243 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1243_piece0 stored as bytes in memory (estimated size 3.1 KB, free 
491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1243_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1243 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1243 (KafkaRDD[1711] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1243.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1244 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1244 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1244 (KafkaRDD[1721] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1212_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1244 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1243.0 (TID 1243, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1212_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1240_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1239_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1241_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1244_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1244_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1244 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1244 (KafkaRDD[1721] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1244.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1245 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1245 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1245 (KafkaRDD[1693] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1242_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 1244.0 (TID 1244, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1245 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1225 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1223_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1245_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1245_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1245 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1245 (KafkaRDD[1693] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1245.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1246 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1246 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1246 (KafkaRDD[1726] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1245.0 (TID 1245, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1246 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1223_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1224 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1227 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1225_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1243_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1225_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1244_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1226 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1246_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1246_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1224_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1246 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 
missing tasks from ResultStage 1246 (KafkaRDD[1726] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1246.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1247 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1247 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1247 (KafkaRDD[1717] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1224_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1246.0 (TID 1246, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1247 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1229 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1227_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1227_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1228 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1226_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1247_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1247_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1247 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1247 (KafkaRDD[1717] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1247.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1249 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1248 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1248 (KafkaRDD[1712] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1248 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1247.0 (TID 1247, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1226_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added 
broadcast_1245_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1217_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1217_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1222_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1222_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1248_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1248_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1248 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1248 (KafkaRDD[1712] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1248.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1248 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1249 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1249 (KafkaRDD[1715] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1228_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1249 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1248.0 (TID 1248, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1228_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1246_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1231 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1229_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1249_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1249_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1249 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1249 (KafkaRDD[1715] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1249.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: 
Got job 1250 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1250 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1250 (KafkaRDD[1699] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1249.0 (TID 1249, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1250 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1229_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1248_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1230 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1233 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1231_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1231_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1250_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1250_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1250 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1250 (KafkaRDD[1699] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1250.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1251 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1251 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1251 (KafkaRDD[1723] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1250.0 (TID 1250, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1251 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1247_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1251_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1251_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO 
spark.SparkContext: Created broadcast 1251 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1251 (KafkaRDD[1723] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1251.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1252 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1252 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1252 (KafkaRDD[1727] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1252 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1251.0 (TID 1251, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1252_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1252_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1252 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1252 (KafkaRDD[1727] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1252.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1253 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1253 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1253 (KafkaRDD[1718] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1253 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1252.0 (TID 1252, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1253_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1253_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1253 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1253 (KafkaRDD[1718] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1253.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1254 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 1254 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1254 (KafkaRDD[1700] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1254 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1253.0 (TID 1253, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1254_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1254_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1254 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1254 (KafkaRDD[1700] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1254.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1255 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1249_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1255 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1255 (KafkaRDD[1702] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1255 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1254.0 (TID 1254, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1251_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1255_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1255_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1255 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1255 (KafkaRDD[1702] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1255.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1256 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1256 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO 
scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1256 (KafkaRDD[1724] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1256 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1255.0 (TID 1255, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1253_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1256_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1256_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1256 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1256 (KafkaRDD[1724] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1256.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1258 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1257 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1257 (KafkaRDD[1716] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1257 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1256.0 (TID 1256, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1232 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1230_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1257_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1257_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1257 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1230_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1257 (KafkaRDD[1716] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1257.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1257 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1258 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 
17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1258 (KafkaRDD[1720] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1258 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1257.0 (TID 1257, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1255_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1254_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1258_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1258_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1258 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1258 (KafkaRDD[1720] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1258.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1259 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1259 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1259 (KafkaRDD[1697] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1259 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1258.0 (TID 1258, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1256_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1259_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1259_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1259 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1259 (KafkaRDD[1697] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1259.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1260 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1260 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added 
broadcast_1250_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1260 (KafkaRDD[1707] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1260 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1257_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1259.0 (TID 1259, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1260_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1260_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1235 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1260 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1260 (KafkaRDD[1707] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1260.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1261 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1261 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1261 (KafkaRDD[1704] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1233_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1261 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1260.0 (TID 1260, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1261_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1261_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1261 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1261 (KafkaRDD[1704] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1261.0 with 1 tasks 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1233_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1262 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1262 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1262 (KafkaRDD[1714] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1262 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1261.0 (TID 1261, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1234 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1232_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1262_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1232_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1262_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1262 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1262 (KafkaRDD[1714] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1262.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1263 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1263 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1263 (KafkaRDD[1725] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1263 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1262.0 (TID 1262, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1252_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1263_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1263_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1263 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1263 (KafkaRDD[1725] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1263.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Got job 1264 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1264 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1264 (KafkaRDD[1701] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1264 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1263.0 (TID 1263, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:20:00 INFO storage.MemoryStore: Block broadcast_1264_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1264_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO spark.SparkContext: Created broadcast 1264 from broadcast at DAGScheduler.scala:1006 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1264 (KafkaRDD[1701] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Adding task set 1264.0 with 1 tasks 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1264.0 (TID 1264, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1245.0 (TID 1245) in 60 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1245.0, whose tasks have all completed, from pool 18/04/17 17:20:00 INFO scheduler.DAGScheduler: ResultStage 1245 (foreachPartition at PredictorEngineApp.java:153) finished in 0.060 s 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Job 1245 finished: foreachPartition at PredictorEngineApp.java:153, took 0.098279 s 18/04/17 17:20:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x29ce56a7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x29ce56a70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1262_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1263_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36204, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1261_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1260_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1259_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1237 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1235_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1235_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9784, negotiated timeout = 60000 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1236 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1234_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1234_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1264_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Added broadcast_1258_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9784 18/04/17 17:20:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9784 closed 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.1 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1249.0 (TID 1249) in 71 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1237_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO scheduler.DAGScheduler: ResultStage 1249 (foreachPartition at PredictorEngineApp.java:153) finished in 0.072 s 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1237_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1249.0, whose tasks have all completed, from pool 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Job 1248 finished: foreachPartition at PredictorEngineApp.java:153, took 0.168318 s 18/04/17 17:20:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x130a726a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:00 INFO zookeeper.ZooKeeper: Initiating client connection, 
connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x130a726a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1238 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36207, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1236_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1236_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9786, negotiated timeout = 60000 18/04/17 17:20:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9786 18/04/17 17:20:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9786 closed 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1212 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1215 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1213 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1221 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1213_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1213_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.23 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1214_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1214_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1221_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1221_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1216 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1216_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1216_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1218_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1218_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1220_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 
INFO storage.BlockManagerInfo: Removed broadcast_1220_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1215_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1215_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1220 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1222 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1218 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1223 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1211_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1211_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1214 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1219 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1219_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:00 INFO storage.BlockManagerInfo: Removed broadcast_1219_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1251.0 (TID 1251) in 156 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1251.0, whose tasks have all completed, from pool 18/04/17 17:20:00 INFO scheduler.DAGScheduler: ResultStage 1251 (foreachPartition at PredictorEngineApp.java:153) finished in 0.157 s 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Job 1251 finished: foreachPartition at PredictorEngineApp.java:153, took 0.219924 s 18/04/17 17:20:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x22fc0e43 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x22fc0e430x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42592, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97d5, negotiated timeout = 60000 18/04/17 17:20:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97d5 18/04/17 17:20:00 INFO spark.ContextCleaner: Cleaned accumulator 1217 18/04/17 17:20:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97d5 closed 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.31 from job set of time 1523974800000 ms 18/04/17 17:20:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1252.0 (TID 1252) in 341 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:20:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1252.0, whose tasks have all completed, from pool 18/04/17 17:20:00 INFO scheduler.DAGScheduler: ResultStage 1252 (foreachPartition at PredictorEngineApp.java:153) finished in 0.342 s 18/04/17 17:20:00 INFO scheduler.DAGScheduler: Job 1252 finished: foreachPartition at PredictorEngineApp.java:153, took 0.407778 s 18/04/17 17:20:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xf633a1c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xf633a1c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47190, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290c1, negotiated timeout = 60000 18/04/17 17:20:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290c1 18/04/17 17:20:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290c1 closed 18/04/17 17:20:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.35 from job set of time 1523974800000 ms 18/04/17 17:20:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1254.0 (TID 1254) in 1385 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:20:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1254.0, whose tasks have all completed, from pool 18/04/17 17:20:01 INFO scheduler.DAGScheduler: ResultStage 1254 (foreachPartition at PredictorEngineApp.java:153) finished in 1.386 s 18/04/17 17:20:01 INFO scheduler.DAGScheduler: Job 1254 finished: foreachPartition at PredictorEngineApp.java:153, took 1.456257 s 18/04/17 17:20:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5eae3b57 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5eae3b570x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36217, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a978a, negotiated timeout = 60000 18/04/17 17:20:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a978a 18/04/17 17:20:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a978a closed 18/04/17 17:20:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.8 from job set of time 1523974800000 ms 18/04/17 17:20:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1247.0 (TID 1247) in 1917 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:20:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1247.0, whose tasks have all completed, from pool 18/04/17 17:20:02 INFO scheduler.DAGScheduler: ResultStage 1247 (foreachPartition at PredictorEngineApp.java:153) finished in 1.917 s 18/04/17 17:20:02 INFO scheduler.DAGScheduler: Job 1247 finished: foreachPartition at PredictorEngineApp.java:153, took 1.964846 s 18/04/17 17:20:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x19d55e70 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x19d55e700x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36220, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a978b, negotiated timeout = 60000 18/04/17 17:20:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a978b 18/04/17 17:20:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a978b closed 18/04/17 17:20:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.25 from job set of time 1523974800000 ms 18/04/17 17:20:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1250.0 (TID 1250) in 2046 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:20:02 INFO scheduler.DAGScheduler: ResultStage 1250 (foreachPartition at PredictorEngineApp.java:153) finished in 2.047 s 18/04/17 17:20:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1250.0, whose tasks have all completed, from pool 18/04/17 17:20:02 INFO scheduler.DAGScheduler: Job 1250 finished: foreachPartition at PredictorEngineApp.java:153, took 2.106793 s 18/04/17 17:20:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6f13b729 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6f13b7290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36224, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a978d, negotiated timeout = 60000 18/04/17 17:20:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a978d 18/04/17 17:20:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a978d closed 18/04/17 17:20:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.7 from job set of time 1523974800000 ms 18/04/17 17:20:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1243.0 (TID 1243) in 5602 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:20:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1243.0, whose tasks have all completed, from pool 18/04/17 17:20:05 INFO scheduler.DAGScheduler: ResultStage 1243 (foreachPartition at PredictorEngineApp.java:153) finished in 5.604 s 18/04/17 17:20:05 INFO scheduler.DAGScheduler: Job 1243 finished: foreachPartition at PredictorEngineApp.java:153, took 5.632244 s 18/04/17 17:20:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x780df117 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x780df1170x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36233, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a978f, negotiated timeout = 60000 18/04/17 17:20:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a978f 18/04/17 17:20:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a978f closed 18/04/17 17:20:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.19 from job set of time 1523974800000 ms 18/04/17 17:20:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1261.0 (TID 1261) in 8549 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:20:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1261.0, whose tasks have all completed, from pool 18/04/17 17:20:08 INFO scheduler.DAGScheduler: ResultStage 1261 (foreachPartition at PredictorEngineApp.java:153) finished in 8.550 s 18/04/17 17:20:08 INFO scheduler.DAGScheduler: Job 1261 finished: foreachPartition at PredictorEngineApp.java:153, took 8.638933 s 18/04/17 17:20:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x34f373b8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x34f373b80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42622, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97dc, negotiated timeout = 60000 18/04/17 17:20:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97dc 18/04/17 17:20:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97dc closed 18/04/17 17:20:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.12 from job set of time 1523974800000 ms 18/04/17 17:20:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1242.0 (TID 1242) in 8698 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:20:08 INFO scheduler.DAGScheduler: ResultStage 1242 (foreachPartition at PredictorEngineApp.java:153) finished in 8.698 s 18/04/17 17:20:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1242.0, whose tasks have all completed, from pool 18/04/17 17:20:08 INFO scheduler.DAGScheduler: Job 1242 finished: foreachPartition at PredictorEngineApp.java:153, took 8.715185 s 18/04/17 17:20:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6043f403 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6043f4030x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47220, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290c6, negotiated timeout = 60000 18/04/17 17:20:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290c6 18/04/17 17:20:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290c6 closed 18/04/17 17:20:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:08 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.18 from job set of time 1523974800000 ms 18/04/17 17:20:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1264.0 (TID 1264) in 8984 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:20:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1264.0, whose tasks have all completed, from pool 18/04/17 17:20:09 INFO scheduler.DAGScheduler: ResultStage 1264 (foreachPartition at PredictorEngineApp.java:153) finished in 8.985 s 18/04/17 17:20:09 INFO scheduler.DAGScheduler: Job 1264 finished: foreachPartition at PredictorEngineApp.java:153, took 9.081009 s 18/04/17 17:20:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x74ac2cd2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x74ac2cd20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36248, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9793, negotiated timeout = 60000 18/04/17 17:20:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9793 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9793 closed 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.9 from job set of time 1523974800000 ms 18/04/17 17:20:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1241.0 (TID 1241) in 9128 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:20:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1241.0, whose tasks have all completed, from pool 18/04/17 17:20:09 INFO scheduler.DAGScheduler: ResultStage 1241 (foreachPartition at PredictorEngineApp.java:153) finished in 9.128 s 18/04/17 17:20:09 INFO scheduler.DAGScheduler: Job 1241 finished: foreachPartition at PredictorEngineApp.java:153, took 9.141282 s 18/04/17 17:20:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f01f168 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f01f1680x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36251, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9794, negotiated timeout = 60000 18/04/17 17:20:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9794 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9794 closed 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.6 from job set of time 1523974800000 ms 18/04/17 17:20:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1263.0 (TID 1263) in 9203 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:20:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1263.0, whose tasks have all completed, from pool 18/04/17 17:20:09 INFO scheduler.DAGScheduler: ResultStage 1263 (foreachPartition at PredictorEngineApp.java:153) finished in 9.203 s 18/04/17 17:20:09 INFO scheduler.DAGScheduler: Job 1263 finished: foreachPartition at PredictorEngineApp.java:153, took 9.297176 s 18/04/17 17:20:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x56589955 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x565899550x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47231, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290c8, negotiated timeout = 60000 18/04/17 17:20:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290c8 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290c8 closed 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1260.0 (TID 1260) in 9232 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:20:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1260.0, whose tasks have all completed, from pool 18/04/17 17:20:09 INFO scheduler.DAGScheduler: ResultStage 1260 (foreachPartition at PredictorEngineApp.java:153) finished in 9.233 s 18/04/17 17:20:09 INFO scheduler.DAGScheduler: Job 1260 finished: foreachPartition at PredictorEngineApp.java:153, took 9.318940 s 18/04/17 17:20:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x749c02b9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x749c02b90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36257, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.33 from job set of time 1523974800000 ms 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9795, negotiated timeout = 60000 18/04/17 17:20:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9795 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9795 closed 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.15 from job set of time 1523974800000 ms 18/04/17 17:20:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1258.0 (TID 1258) in 9302 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:20:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1258.0, whose tasks have all completed, from pool 18/04/17 17:20:09 INFO scheduler.DAGScheduler: ResultStage 1258 (foreachPartition at PredictorEngineApp.java:153) finished in 9.303 s 18/04/17 17:20:09 INFO scheduler.DAGScheduler: Job 1257 finished: foreachPartition at PredictorEngineApp.java:153, took 9.383655 s 18/04/17 17:20:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x550e08c8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x550e08c80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36260, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9797, negotiated timeout = 60000 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1261_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9797 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1261_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1243 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1241_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1241_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1242 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1243_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1243_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1244 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9797 closed 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1242_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1242_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1246 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1245_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1245_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1247_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1247_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1248 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1264_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1264_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1265 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1263_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1263_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 
ms.28 from job set of time 1523974800000 ms 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1249_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1249_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1250 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1252 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1250_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1250_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1251 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1252_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1252_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1253 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1251_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1251_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1255 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1254_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1254_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1258_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1258_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1259 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1261 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1262 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1260_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:20:09 INFO storage.BlockManagerInfo: Removed broadcast_1260_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:20:09 INFO spark.ContextCleaner: Cleaned accumulator 1264 18/04/17 17:20:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1256.0 (TID 1256) in 9624 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:20:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1256.0, whose tasks have all completed, from pool 18/04/17 17:20:09 INFO scheduler.DAGScheduler: ResultStage 1256 (foreachPartition at PredictorEngineApp.java:153) finished in 9.624 s 18/04/17 17:20:09 INFO scheduler.DAGScheduler: Job 1256 finished: foreachPartition at PredictorEngineApp.java:153, took 9.699769 s 18/04/17 17:20:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x51b1fd7b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Initiating client connection, 
connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x51b1fd7b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42645, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97dd, negotiated timeout = 60000 18/04/17 17:20:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97dd 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97dd closed 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.32 from job set of time 1523974800000 ms 18/04/17 17:20:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1239.0 (TID 1239) in 9788 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:20:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1239.0, whose tasks have all completed, from pool 18/04/17 17:20:09 INFO scheduler.DAGScheduler: ResultStage 1239 (foreachPartition at PredictorEngineApp.java:153) finished in 9.788 s 18/04/17 17:20:09 INFO scheduler.DAGScheduler: Job 1240 finished: foreachPartition at PredictorEngineApp.java:153, took 9.795740 s 18/04/17 17:20:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x25503a19 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x25503a190x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36266, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9798, negotiated timeout = 60000 18/04/17 17:20:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9798 18/04/17 17:20:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9798 closed 18/04/17 17:20:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.2 from job set of time 1523974800000 ms 18/04/17 17:20:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1246.0 (TID 1246) in 10354 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:20:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1246.0, whose tasks have all completed, from pool 18/04/17 17:20:10 INFO scheduler.DAGScheduler: ResultStage 1246 (foreachPartition at PredictorEngineApp.java:153) finished in 10.354 s 18/04/17 17:20:10 INFO scheduler.DAGScheduler: Job 1246 finished: foreachPartition at PredictorEngineApp.java:153, took 10.397665 s 18/04/17 17:20:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x158433e2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x158433e20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36270, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9799, negotiated timeout = 60000 18/04/17 17:20:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9799 18/04/17 17:20:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9799 closed 18/04/17 17:20:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.34 from job set of time 1523974800000 ms 18/04/17 17:20:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1240.0 (TID 1240) in 12153 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:20:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1240.0, whose tasks have all completed, from pool 18/04/17 17:20:12 INFO scheduler.DAGScheduler: ResultStage 1240 (foreachPartition at PredictorEngineApp.java:153) finished in 12.154 s 18/04/17 17:20:12 INFO scheduler.DAGScheduler: Job 1239 finished: foreachPartition at PredictorEngineApp.java:153, took 12.164369 s 18/04/17 17:20:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2e82d4ea connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2e82d4ea0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36275, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a979a, negotiated timeout = 60000 18/04/17 17:20:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a979a 18/04/17 17:20:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a979a closed 18/04/17 17:20:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:12 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.11 from job set of time 1523974800000 ms 18/04/17 17:20:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1244.0 (TID 1244) in 13417 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:20:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1244.0, whose tasks have all completed, from pool 18/04/17 17:20:13 INFO scheduler.DAGScheduler: ResultStage 1244 (foreachPartition at PredictorEngineApp.java:153) finished in 13.418 s 18/04/17 17:20:13 INFO scheduler.DAGScheduler: Job 1244 finished: foreachPartition at PredictorEngineApp.java:153, took 13.451184 s 18/04/17 17:20:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4777c6e0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4777c6e00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47256, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290cd, negotiated timeout = 60000 18/04/17 17:20:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290cd 18/04/17 17:20:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290cd closed 18/04/17 17:20:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.29 from job set of time 1523974800000 ms 18/04/17 17:20:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1248.0 (TID 1248) in 14163 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:20:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1248.0, whose tasks have all completed, from pool 18/04/17 17:20:14 INFO scheduler.DAGScheduler: ResultStage 1248 (foreachPartition at PredictorEngineApp.java:153) finished in 14.164 s 18/04/17 17:20:14 INFO scheduler.DAGScheduler: Job 1249 finished: foreachPartition at PredictorEngineApp.java:153, took 14.217505 s 18/04/17 17:20:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xfcf4ca connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xfcf4ca0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47262, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290cf, negotiated timeout = 60000 18/04/17 17:20:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290cf 18/04/17 17:20:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290cf closed 18/04/17 17:20:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.20 from job set of time 1523974800000 ms 18/04/17 17:20:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1238.0 (TID 1238) in 14817 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:20:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1238.0, whose tasks have all completed, from pool 18/04/17 17:20:14 INFO scheduler.DAGScheduler: ResultStage 1238 (foreachPartition at PredictorEngineApp.java:153) finished in 14.817 s 18/04/17 17:20:14 INFO scheduler.DAGScheduler: Job 1238 finished: foreachPartition at PredictorEngineApp.java:153, took 14.822387 s 18/04/17 17:20:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x310da21c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x310da21c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42670, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97e1, negotiated timeout = 60000 18/04/17 17:20:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97e1 18/04/17 17:20:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97e1 closed 18/04/17 17:20:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.27 from job set of time 1523974800000 ms 18/04/17 17:20:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1262.0 (TID 1262) in 15365 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:20:15 INFO scheduler.DAGScheduler: ResultStage 1262 (foreachPartition at PredictorEngineApp.java:153) finished in 15.365 s 18/04/17 17:20:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1262.0, whose tasks have all completed, from pool 18/04/17 17:20:15 INFO scheduler.DAGScheduler: Job 1262 finished: foreachPartition at PredictorEngineApp.java:153, took 15.456884 s 18/04/17 17:20:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6cff1e93 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6cff1e930x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36292, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a979d, negotiated timeout = 60000 18/04/17 17:20:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a979d 18/04/17 17:20:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a979d closed 18/04/17 17:20:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.22 from job set of time 1523974800000 ms 18/04/17 17:20:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1257.0 (TID 1257) in 15742 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:20:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1257.0, whose tasks have all completed, from pool 18/04/17 17:20:15 INFO scheduler.DAGScheduler: ResultStage 1257 (foreachPartition at PredictorEngineApp.java:153) finished in 15.743 s 18/04/17 17:20:15 INFO scheduler.DAGScheduler: Job 1258 finished: foreachPartition at PredictorEngineApp.java:153, took 15.821277 s 18/04/17 17:20:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x13a22b4d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x13a22b4d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42678, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97e2, negotiated timeout = 60000 18/04/17 17:20:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97e2 18/04/17 17:20:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97e2 closed 18/04/17 17:20:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.24 from job set of time 1523974800000 ms 18/04/17 17:20:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1255.0 (TID 1255) in 18791 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:20:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1255.0, whose tasks have all completed, from pool 18/04/17 17:20:18 INFO scheduler.DAGScheduler: ResultStage 1255 (foreachPartition at PredictorEngineApp.java:153) finished in 18.791 s 18/04/17 17:20:18 INFO scheduler.DAGScheduler: Job 1255 finished: foreachPartition at PredictorEngineApp.java:153, took 18.864134 s 18/04/17 17:20:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1259.0 (TID 1259) in 18780 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:20:18 INFO scheduler.DAGScheduler: ResultStage 1259 (foreachPartition at PredictorEngineApp.java:153) finished in 18.780 s 18/04/17 17:20:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1259.0, whose tasks have all completed, from pool 18/04/17 17:20:18 INFO scheduler.DAGScheduler: Job 1259 finished: foreachPartition at PredictorEngineApp.java:153, took 18.864201 s 18/04/17 17:20:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3a2dbf06 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6287aecc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3a2dbf060x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6287aecc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36302, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47280, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a979f, negotiated timeout = 60000 18/04/17 17:20:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290d2, negotiated timeout = 60000 18/04/17 17:20:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a979f 18/04/17 17:20:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290d2 18/04/17 17:20:18 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a979f closed 18/04/17 17:20:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:18 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290d2 closed 18/04/17 17:20:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:18 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.5 from job set of time 1523974800000 ms 18/04/17 17:20:18 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.10 from job set of time 1523974800000 ms 18/04/17 17:20:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1253.0 (TID 1253) in 21495 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:20:21 INFO scheduler.DAGScheduler: ResultStage 1253 (foreachPartition at PredictorEngineApp.java:153) finished in 21.495 s 18/04/17 17:20:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 1253.0, whose tasks have all completed, from pool 18/04/17 17:20:21 INFO scheduler.DAGScheduler: Job 1253 finished: foreachPartition at PredictorEngineApp.java:153, took 21.563252 s 18/04/17 17:20:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72992590 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:20:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x729925900x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:20:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:20:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42695, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:20:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97e4, negotiated timeout = 60000 18/04/17 17:20:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97e4 18/04/17 17:20:21 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97e4 closed 18/04/17 17:20:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:20:21 INFO scheduler.JobScheduler: Finished job streaming job 1523974800000 ms.26 from job set of time 1523974800000 ms 18/04/17 17:20:21 INFO scheduler.JobScheduler: Total delay: 21.645 s for time 1523974800000 ms (execution: 21.596 s) 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1656 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1656 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1656 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1656 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1657 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1657 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1657 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1657 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1658 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1658 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1658 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1658 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1659 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1659 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1659 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1659 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1660 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1660 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1660 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1660 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1661 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1661 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1661 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1661 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1662 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1662 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1662 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1662 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1663 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1663 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1663 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1663 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1664 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1664 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1664 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1664 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1665 
from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1665 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1665 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1665 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1666 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1666 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1666 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1666 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1667 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1667 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1667 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1667 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1668 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1668 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1668 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1668 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1669 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1669 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1669 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1669 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1670 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1670 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1670 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1670 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1671 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1671 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1671 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1671 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1672 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1672 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1672 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1672 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1673 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1673 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1673 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1673 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1674 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1674 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1674 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1674 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1675 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1675 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1675 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1675 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1676 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1676 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1676 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1676 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1677 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1677 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1677 from 
persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1677 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1678 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1678 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1678 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1678 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1679 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1679 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1679 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1679 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1680 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1680 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1680 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1680 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1681 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1681 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1681 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1681 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1682 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1682 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1682 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1682 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1683 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1683 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1683 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1683 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1684 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1684 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1684 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1684 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1685 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1685 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1685 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1685 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1686 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1686 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1686 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1686 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1687 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1687 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1687 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1687 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1688 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1688 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1688 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1688 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1689 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1689 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1689 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1689 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1690 from 
persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1690 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1690 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1690 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1691 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1691 18/04/17 17:20:21 INFO kafka.KafkaRDD: Removing RDD 1691 from persistence list 18/04/17 17:20:21 INFO storage.BlockManager: Removing RDD 1691 18/04/17 17:20:21 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:20:21 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974680000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Added jobs for time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.0 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.2 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.1 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.3 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.4 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.0 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.3 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.5 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.7 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.4 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.6 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.8 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.9 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.11 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.10 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.12 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.13 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.14 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.13 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.15 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.17 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.16 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.17 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.14 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.18 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.19 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.16 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.20 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.22 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.21 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.21 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.23 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.24 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.25 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.26 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.27 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.28 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.29 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.30 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.31 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.32 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.30 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.33 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.34 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974860000 ms.35 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1265 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1265 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO 
scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1265 (KafkaRDD[1730] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1265 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1265_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1265_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1265 from broadcast at DAGScheduler.scala:1006 
18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1265 (KafkaRDD[1730] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1265.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1266 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1266 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1266 (KafkaRDD[1739] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1265.0 (TID 1265, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1266 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1266_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1266_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1266 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1266 (KafkaRDD[1739] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1266.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1267 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1267 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1267 (KafkaRDD[1729] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1266.0 (TID 1266, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1267 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1267_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1267_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1267 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1267 (KafkaRDD[1729] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1267.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1268 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1268 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1268 (KafkaRDD[1735] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1267.0 (TID 1267, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1268 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1268_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1268_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1268 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1268 (KafkaRDD[1735] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1268.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1269 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1269 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1269 (KafkaRDD[1757] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1268.0 (TID 1268, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1269 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1269_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1269_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1269 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1269 (KafkaRDD[1757] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1269.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1270 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1270 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1270 (KafkaRDD[1746] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1269.0 (TID 1269, ***hostname masked***, executor 2, partition 0, 
NODE_LOCAL, 2049 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1270 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1266_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1270_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1270_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1270 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1270 (KafkaRDD[1746] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1270.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1271 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1271 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1271 (KafkaRDD[1750] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1270.0 (TID 1270, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1271 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1265_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1271_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1271_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1271 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1271 (KafkaRDD[1750] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1271.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1272 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1272 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1272 (KafkaRDD[1752] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1271.0 (TID 1271, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1272 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: 
Added broadcast_1270_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1272_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1272_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1268_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1272 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1272 (KafkaRDD[1752] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1272.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1273 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1273 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1273 (KafkaRDD[1759] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1272.0 (TID 1272, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1273 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1267_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1273_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1273_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1273 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1273 (KafkaRDD[1759] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1273.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1274 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1274 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1274 (KafkaRDD[1734] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1273.0 (TID 1273, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1274 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1274_piece0 stored as bytes in memory (estimated 
size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1274_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1274 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1274 (KafkaRDD[1734] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1274.0 with 1 tasks 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1269_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1275 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1275 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1275 (KafkaRDD[1761] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1275 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1274.0 (TID 1274, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1271_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1275_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1275_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1275 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1275 (KafkaRDD[1761] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1275.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1276 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1276 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1276 (KafkaRDD[1751] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1276 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1275.0 (TID 1275, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1272_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1276_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO 
storage.BlockManagerInfo: Added broadcast_1276_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1276 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1276 (KafkaRDD[1751] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1276.0 with 1 tasks 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1273_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1277 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1277 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1277 (KafkaRDD[1763] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1276.0 (TID 1276, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1277 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1274_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1277_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1277_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1277 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1277 (KafkaRDD[1763] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1277.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1278 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1278 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1278 (KafkaRDD[1737] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1278 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1277.0 (TID 1277, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1276_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1278_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1278_piece0 in 
memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1278 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1278 (KafkaRDD[1737] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1278.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1279 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1279 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1279 (KafkaRDD[1754] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1279 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1278.0 (TID 1278, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1277_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1275_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1253_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1279_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1279_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1279 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1279 (KafkaRDD[1754] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1279.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1280 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1280 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1280 (KafkaRDD[1747] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1280 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1279.0 (TID 1279, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1278_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1253_piece0 on ***hostname masked***:55279 in memory (size: 3.1 
KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1280_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1280_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1280 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1280 (KafkaRDD[1747] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1280.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1281 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1281 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1281 (KafkaRDD[1753] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1281 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1280.0 (TID 1280, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1256_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1279_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1256_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1281_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1281_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1281 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1281 (KafkaRDD[1753] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1281.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1282 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1282 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1282 (KafkaRDD[1755] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1238_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1282 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 
0.0 in stage 1281.0 (TID 1281, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1238_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1240_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1280_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1240_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1282_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1282_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1282 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1282 (KafkaRDD[1755] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1282.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1283 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1283 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1283 (KafkaRDD[1733] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1283 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1282.0 (TID 1282, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1283_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1283_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1283 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1283 (KafkaRDD[1733] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1283.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1284 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1284 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1284 (KafkaRDD[1738] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1284 stored as values in memory (estimated 
size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1281_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1283.0 (TID 1283, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1282_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1284_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1284_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1284 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1284 (KafkaRDD[1738] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1284.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1286 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1285 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1285 (KafkaRDD[1743] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1285 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1284.0 (TID 1284, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1285_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1285_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1285 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1285 (KafkaRDD[1743] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1285.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1285 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1286 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1286 (KafkaRDD[1748] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1249 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1286 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1285.0 (TID 1285, ***hostname 
masked***, executor 7, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1244_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1244_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1283_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1257_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1257_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1286_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1286_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1286 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1286 (KafkaRDD[1748] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1286.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1287 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1287 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1287 (KafkaRDD[1736] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1258 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1260 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1241 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1247 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1287 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1259_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1286.0 (TID 1286, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1285_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1259_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1287_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1287_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1287 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1287 (KafkaRDD[1736] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1287.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1288 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1288 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1288 (KafkaRDD[1740] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1288 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1287.0 (TID 1287, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1262_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1288_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1288_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1288 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1288 (KafkaRDD[1740] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1288.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1289 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1289 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1262_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1289 (KafkaRDD[1756] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1289 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1263 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1245 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1288.0 (TID 1288, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1239_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1284_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1289_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1289_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 
18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1289 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1239_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1289 (KafkaRDD[1756] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1289.0 with 1 tasks 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1286_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1290 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1290 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1290 (KafkaRDD[1762] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1290 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1246_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1289.0 (TID 1289, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1246_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1290_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1290_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1290 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1290 (KafkaRDD[1762] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1290.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Got job 1291 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1291 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1291 (KafkaRDD[1760] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1291 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1255_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1290.0 (TID 1290, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:21:00 INFO 
scheduler.TaskSetManager: Finished task 0.0 in stage 1271.0 (TID 1271) in 82 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1255_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1271.0, whose tasks have all completed, from pool 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1257 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1240 18/04/17 17:21:00 INFO storage.MemoryStore: Block broadcast_1291_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1291_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1248_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:21:00 INFO spark.SparkContext: Created broadcast 1291 from broadcast at DAGScheduler.scala:1006 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1291 (KafkaRDD[1760] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Adding task set 1291.0 with 1 tasks 18/04/17 17:21:00 INFO scheduler.DAGScheduler: ResultStage 1271 (foreachPartition at PredictorEngineApp.java:153) finished in 0.084 s 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Removed broadcast_1248_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Job 1271 finished: foreachPartition at PredictorEngineApp.java:153, took 0.115587 s 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1291.0 (TID 1291, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1256 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1239 18/04/17 17:21:00 INFO spark.ContextCleaner: Cleaned accumulator 1254 18/04/17 17:21:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7842089e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7842089e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1288_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36452, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1289_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1287_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1291_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO storage.BlockManagerInfo: Added broadcast_1290_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:21:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97a9, negotiated timeout = 60000 18/04/17 17:21:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97a9 18/04/17 17:21:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97a9 closed 18/04/17 17:21:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.22 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1277.0 (TID 1277) in 261 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1277.0, whose tasks have all completed, from pool 18/04/17 17:21:00 INFO scheduler.DAGScheduler: ResultStage 1277 (foreachPartition at PredictorEngineApp.java:153) finished in 0.261 s 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Job 1277 finished: foreachPartition at PredictorEngineApp.java:153, took 0.318955 s 18/04/17 17:21:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x781bb5b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x781bb5b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36455, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97b0, negotiated timeout = 60000 18/04/17 17:21:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97b0 18/04/17 17:21:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97b0 closed 18/04/17 17:21:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.35 from job set of time 1523974860000 ms 18/04/17 17:21:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1281.0 (TID 1281) in 835 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:21:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1281.0, whose tasks have all completed, from pool 18/04/17 17:21:00 INFO scheduler.DAGScheduler: ResultStage 1281 (foreachPartition at PredictorEngineApp.java:153) finished in 0.836 s 18/04/17 17:21:00 INFO scheduler.DAGScheduler: Job 1281 finished: foreachPartition at PredictorEngineApp.java:153, took 0.916950 s 18/04/17 17:21:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x44e0bea0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x44e0bea00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36458, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97b2, negotiated timeout = 60000 18/04/17 17:21:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97b2 18/04/17 17:21:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97b2 closed 18/04/17 17:21:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.25 from job set of time 1523974860000 ms 18/04/17 17:21:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1287.0 (TID 1287) in 2341 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:21:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1287.0, whose tasks have all completed, from pool 18/04/17 17:21:02 INFO scheduler.DAGScheduler: ResultStage 1287 (foreachPartition at PredictorEngineApp.java:153) finished in 2.342 s 18/04/17 17:21:02 INFO scheduler.DAGScheduler: Job 1287 finished: foreachPartition at PredictorEngineApp.java:153, took 2.446220 s 18/04/17 17:21:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x11950a11 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x11950a110x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47440, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290e6, negotiated timeout = 60000 18/04/17 17:21:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290e6 18/04/17 17:21:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290e6 closed 18/04/17 17:21:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:02 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.8 from job set of time 1523974860000 ms 18/04/17 17:21:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1268.0 (TID 1268) in 3752 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:21:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1268.0, whose tasks have all completed, from pool 18/04/17 17:21:03 INFO scheduler.DAGScheduler: ResultStage 1268 (foreachPartition at PredictorEngineApp.java:153) finished in 3.752 s 18/04/17 17:21:03 INFO scheduler.DAGScheduler: Job 1268 finished: foreachPartition at PredictorEngineApp.java:153, took 3.772514 s 18/04/17 17:21:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x346f0fa6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x346f0fa60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36468, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97b4, negotiated timeout = 60000 18/04/17 17:21:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97b4 18/04/17 17:21:03 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97b4 closed 18/04/17 17:21:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.7 from job set of time 1523974860000 ms 18/04/17 17:21:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1290.0 (TID 1290) in 5504 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:21:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1290.0, whose tasks have all completed, from pool 18/04/17 17:21:05 INFO scheduler.DAGScheduler: ResultStage 1290 (foreachPartition at PredictorEngineApp.java:153) finished in 5.505 s 18/04/17 17:21:05 INFO scheduler.DAGScheduler: Job 1290 finished: foreachPartition at PredictorEngineApp.java:153, took 5.615890 s 18/04/17 17:21:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x68948fb2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x68948fb20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47451, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290e9, negotiated timeout = 60000 18/04/17 17:21:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1273.0 (TID 1273) in 5587 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:21:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1273.0, whose tasks have all completed, from pool 18/04/17 17:21:05 INFO scheduler.DAGScheduler: ResultStage 1273 (foreachPartition at PredictorEngineApp.java:153) finished in 5.587 s 18/04/17 17:21:05 INFO scheduler.DAGScheduler: Job 1273 finished: foreachPartition at PredictorEngineApp.java:153, took 5.627882 s 18/04/17 17:21:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290e9 18/04/17 17:21:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290e9 closed 18/04/17 17:21:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.31 from job set of time 1523974860000 ms 18/04/17 17:21:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.34 from job set of time 1523974860000 ms 18/04/17 17:21:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1265.0 (TID 1265) in 6821 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:21:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1265.0, whose tasks have all completed, from pool 18/04/17 17:21:06 INFO scheduler.DAGScheduler: ResultStage 1265 (foreachPartition at PredictorEngineApp.java:153) finished in 6.821 s 18/04/17 17:21:06 INFO scheduler.DAGScheduler: Job 1265 finished: foreachPartition at PredictorEngineApp.java:153, took 6.828985 s 18/04/17 17:21:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xb4f86c0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xb4f86c00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42860, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97f8, negotiated timeout = 60000 18/04/17 17:21:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97f8 18/04/17 17:21:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97f8 closed 18/04/17 17:21:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.2 from job set of time 1523974860000 ms 18/04/17 17:21:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1285.0 (TID 1285) in 6941 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:21:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1285.0, whose tasks have all completed, from pool 18/04/17 17:21:07 INFO scheduler.DAGScheduler: ResultStage 1285 (foreachPartition at PredictorEngineApp.java:153) finished in 6.943 s 18/04/17 17:21:07 INFO scheduler.DAGScheduler: Job 1286 finished: foreachPartition at PredictorEngineApp.java:153, took 7.037701 s 18/04/17 17:21:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x75e008e4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x75e008e40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36482, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97b5, negotiated timeout = 60000 18/04/17 17:21:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97b5 18/04/17 17:21:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97b5 closed 18/04/17 17:21:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:07 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.15 from job set of time 1523974860000 ms 18/04/17 17:21:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1286.0 (TID 1286) in 9327 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:21:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1286.0, whose tasks have all completed, from pool 18/04/17 17:21:09 INFO scheduler.DAGScheduler: ResultStage 1286 (foreachPartition at PredictorEngineApp.java:153) finished in 9.328 s 18/04/17 17:21:09 INFO scheduler.DAGScheduler: Job 1285 finished: foreachPartition at PredictorEngineApp.java:153, took 9.427838 s 18/04/17 17:21:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4c12682 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4c126820x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47466, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290ec, negotiated timeout = 60000 18/04/17 17:21:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290ec 18/04/17 17:21:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290ec closed 18/04/17 17:21:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.20 from job set of time 1523974860000 ms 18/04/17 17:21:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1275.0 (TID 1275) in 9463 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:21:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1275.0, whose tasks have all completed, from pool 18/04/17 17:21:09 INFO scheduler.DAGScheduler: ResultStage 1275 (foreachPartition at PredictorEngineApp.java:153) finished in 9.464 s 18/04/17 17:21:09 INFO scheduler.DAGScheduler: Job 1275 finished: foreachPartition at PredictorEngineApp.java:153, took 9.513212 s 18/04/17 17:21:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x70deea76 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x70deea760x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42874, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97fb, negotiated timeout = 60000 18/04/17 17:21:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97fb 18/04/17 17:21:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97fb closed 18/04/17 17:21:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.33 from job set of time 1523974860000 ms 18/04/17 17:21:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1272.0 (TID 1272) in 9981 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:21:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1272.0, whose tasks have all completed, from pool 18/04/17 17:21:10 INFO scheduler.DAGScheduler: ResultStage 1272 (foreachPartition at PredictorEngineApp.java:153) finished in 9.981 s 18/04/17 17:21:10 INFO scheduler.DAGScheduler: Job 1272 finished: foreachPartition at PredictorEngineApp.java:153, took 10.017969 s 18/04/17 17:21:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xba2306f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xba2306f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36497, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97b6, negotiated timeout = 60000 18/04/17 17:21:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97b6 18/04/17 17:21:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97b6 closed 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.24 from job set of time 1523974860000 ms 18/04/17 17:21:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1291.0 (TID 1291) in 10073 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:21:10 INFO scheduler.DAGScheduler: ResultStage 1291 (foreachPartition at PredictorEngineApp.java:153) finished in 10.074 s 18/04/17 17:21:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1291.0, whose tasks have all completed, from pool 18/04/17 17:21:10 INFO scheduler.DAGScheduler: Job 1291 finished: foreachPartition at PredictorEngineApp.java:153, took 10.186430 s 18/04/17 17:21:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xb661e19 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xb661e190x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42882, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97fd, negotiated timeout = 60000 18/04/17 17:21:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97fd 18/04/17 17:21:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97fd closed 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.32 from job set of time 1523974860000 ms 18/04/17 17:21:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1282.0 (TID 1282) in 10225 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:21:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1282.0, whose tasks have all completed, from pool 18/04/17 17:21:10 INFO scheduler.DAGScheduler: ResultStage 1282 (foreachPartition at PredictorEngineApp.java:153) finished in 10.225 s 18/04/17 17:21:10 INFO scheduler.DAGScheduler: Job 1282 finished: foreachPartition at PredictorEngineApp.java:153, took 10.310643 s 18/04/17 17:21:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4e1139ae connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4e1139ae0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36503, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97b7, negotiated timeout = 60000 18/04/17 17:21:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97b7 18/04/17 17:21:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97b7 closed 18/04/17 17:21:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.27 from job set of time 1523974860000 ms 18/04/17 17:21:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1288.0 (TID 1288) in 11242 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:21:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1288.0, whose tasks have all completed, from pool 18/04/17 17:21:11 INFO scheduler.DAGScheduler: ResultStage 1288 (foreachPartition at PredictorEngineApp.java:153) finished in 11.244 s 18/04/17 17:21:11 INFO scheduler.DAGScheduler: Job 1288 finished: foreachPartition at PredictorEngineApp.java:153, took 11.349620 s 18/04/17 17:21:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1bc80a3d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1bc80a3d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:42889, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c97ff, negotiated timeout = 60000 18/04/17 17:21:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c97ff 18/04/17 17:21:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c97ff closed 18/04/17 17:21:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.12 from job set of time 1523974860000 ms 18/04/17 17:21:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1274.0 (TID 1274) in 13914 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:21:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1274.0, whose tasks have all completed, from pool 18/04/17 17:21:14 INFO scheduler.DAGScheduler: ResultStage 1274 (foreachPartition at PredictorEngineApp.java:153) finished in 13.915 s 18/04/17 17:21:14 INFO scheduler.DAGScheduler: Job 1274 finished: foreachPartition at PredictorEngineApp.java:153, took 13.959284 s 18/04/17 17:21:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7196fe0f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7196fe0f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36512, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97ba, negotiated timeout = 60000 18/04/17 17:21:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1270.0 (TID 1270) in 13948 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:21:14 INFO scheduler.DAGScheduler: ResultStage 1270 (foreachPartition at PredictorEngineApp.java:153) finished in 13.948 s 18/04/17 17:21:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1270.0, whose tasks have all completed, from pool 18/04/17 17:21:14 INFO scheduler.DAGScheduler: Job 1270 finished: foreachPartition at PredictorEngineApp.java:153, took 13.976064 s 18/04/17 17:21:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97ba 18/04/17 17:21:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97ba closed 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.6 from job set of time 1523974860000 ms 18/04/17 17:21:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.18 from job set of time 1523974860000 ms 18/04/17 17:21:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1278.0 (TID 1278) in 14213 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:21:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1278.0, whose tasks have all completed, from pool 18/04/17 17:21:14 INFO scheduler.DAGScheduler: ResultStage 1278 (foreachPartition at PredictorEngineApp.java:153) finished in 14.213 s 18/04/17 17:21:14 INFO scheduler.DAGScheduler: Job 1278 finished: foreachPartition at PredictorEngineApp.java:153, took 14.274666 s 18/04/17 17:21:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x16a1f3bc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x16a1f3bc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36516, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97bc, negotiated timeout = 60000 18/04/17 17:21:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97bc 18/04/17 17:21:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97bc closed 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.9 from job set of time 1523974860000 ms 18/04/17 17:21:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1276.0 (TID 1276) in 14296 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:21:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1276.0, whose tasks have all completed, from pool 18/04/17 17:21:14 INFO scheduler.DAGScheduler: ResultStage 1276 (foreachPartition at PredictorEngineApp.java:153) finished in 14.297 s 18/04/17 17:21:14 INFO scheduler.DAGScheduler: Job 1276 finished: foreachPartition at PredictorEngineApp.java:153, took 14.350038 s 18/04/17 17:21:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3f5d51ef connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3f5d51ef0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47498, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290ef, negotiated timeout = 60000 18/04/17 17:21:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290ef 18/04/17 17:21:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290ef closed 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.23 from job set of time 1523974860000 ms 18/04/17 17:21:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1280.0 (TID 1280) in 14390 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:21:14 INFO scheduler.DAGScheduler: ResultStage 1280 (foreachPartition at PredictorEngineApp.java:153) finished in 14.391 s 18/04/17 17:21:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1280.0, whose tasks have all completed, from pool 18/04/17 17:21:14 INFO scheduler.DAGScheduler: Job 1280 finished: foreachPartition at PredictorEngineApp.java:153, took 14.468871 s 18/04/17 17:21:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x79e23e99 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x79e23e990x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36525, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97be, negotiated timeout = 60000 18/04/17 17:21:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97be 18/04/17 17:21:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97be closed 18/04/17 17:21:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.19 from job set of time 1523974860000 ms 18/04/17 17:21:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1289.0 (TID 1289) in 15633 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:21:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1289.0, whose tasks have all completed, from pool 18/04/17 17:21:15 INFO scheduler.DAGScheduler: ResultStage 1289 (foreachPartition at PredictorEngineApp.java:153) finished in 15.634 s 18/04/17 17:21:15 INFO scheduler.DAGScheduler: Job 1289 finished: foreachPartition at PredictorEngineApp.java:153, took 15.743566 s 18/04/17 17:21:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xf925bd3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xf925bd30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36531, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97bf, negotiated timeout = 60000 18/04/17 17:21:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97bf 18/04/17 17:21:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97bf closed 18/04/17 17:21:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.28 from job set of time 1523974860000 ms 18/04/17 17:21:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1269.0 (TID 1269) in 16130 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:21:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 1269.0, whose tasks have all completed, from pool 18/04/17 17:21:16 INFO scheduler.DAGScheduler: ResultStage 1269 (foreachPartition at PredictorEngineApp.java:153) finished in 16.130 s 18/04/17 17:21:16 INFO scheduler.DAGScheduler: Job 1269 finished: foreachPartition at PredictorEngineApp.java:153, took 16.153047 s 18/04/17 17:21:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x77103239 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x771032390x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36535, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97c0, negotiated timeout = 60000 18/04/17 17:21:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97c0 18/04/17 17:21:16 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97c0 closed 18/04/17 17:21:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:16 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.29 from job set of time 1523974860000 ms 18/04/17 17:21:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1266.0 (TID 1266) in 17683 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:21:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1266.0, whose tasks have all completed, from pool 18/04/17 17:21:17 INFO scheduler.DAGScheduler: ResultStage 1266 (foreachPartition at PredictorEngineApp.java:153) finished in 17.684 s 18/04/17 17:21:17 INFO scheduler.DAGScheduler: Job 1266 finished: foreachPartition at PredictorEngineApp.java:153, took 17.696301 s 18/04/17 17:21:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6c1ca28 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6c1ca280x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47516, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290f2, negotiated timeout = 60000 18/04/17 17:21:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290f2 18/04/17 17:21:17 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290f2 closed 18/04/17 17:21:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:17 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.11 from job set of time 1523974860000 ms 18/04/17 17:21:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1267.0 (TID 1267) in 20213 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:21:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 1267.0, whose tasks have all completed, from pool 18/04/17 17:21:20 INFO scheduler.DAGScheduler: ResultStage 1267 (foreachPartition at PredictorEngineApp.java:153) finished in 20.213 s 18/04/17 17:21:20 INFO scheduler.DAGScheduler: Job 1267 finished: foreachPartition at PredictorEngineApp.java:153, took 20.228807 s 18/04/17 17:21:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x31eb1a4a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x31eb1a4a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47524, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290f5, negotiated timeout = 60000 18/04/17 17:21:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290f5 18/04/17 17:21:20 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290f5 closed 18/04/17 17:21:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:20 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.1 from job set of time 1523974860000 ms 18/04/17 17:21:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1279.0 (TID 1279) in 22794 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:21:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 1279.0, whose tasks have all completed, from pool 18/04/17 17:21:22 INFO scheduler.DAGScheduler: ResultStage 1279 (foreachPartition at PredictorEngineApp.java:153) finished in 22.794 s 18/04/17 17:21:22 INFO scheduler.DAGScheduler: Job 1279 finished: foreachPartition at PredictorEngineApp.java:153, took 22.868408 s 18/04/17 17:21:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x75c98f7e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x75c98f7e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47530, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290f7, negotiated timeout = 60000 18/04/17 17:21:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290f7 18/04/17 17:21:22 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290f7 closed 18/04/17 17:21:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:22 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.26 from job set of time 1523974860000 ms 18/04/17 17:21:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1283.0 (TID 1283) in 25597 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:21:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 1283.0, whose tasks have all completed, from pool 18/04/17 17:21:25 INFO scheduler.DAGScheduler: ResultStage 1283 (foreachPartition at PredictorEngineApp.java:153) finished in 25.598 s 18/04/17 17:21:25 INFO scheduler.DAGScheduler: Job 1283 finished: foreachPartition at PredictorEngineApp.java:153, took 25.686594 s 18/04/17 17:21:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c44ad22 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c44ad220x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47539, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290f8, negotiated timeout = 60000 18/04/17 17:21:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290f8 18/04/17 17:21:25 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290f8 closed 18/04/17 17:21:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:25 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.5 from job set of time 1523974860000 ms 18/04/17 17:21:28 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1284.0 (TID 1284) in 28031 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:21:28 INFO cluster.YarnClusterScheduler: Removed TaskSet 1284.0, whose tasks have all completed, from pool 18/04/17 17:21:28 INFO scheduler.DAGScheduler: ResultStage 1284 (foreachPartition at PredictorEngineApp.java:153) finished in 28.032 s 18/04/17 17:21:28 INFO scheduler.DAGScheduler: Job 1284 finished: foreachPartition at PredictorEngineApp.java:153, took 28.123672 s 18/04/17 17:21:28 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x18ed2329 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:21:28 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x18ed23290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:21:28 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:21:28 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47546, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:21:28 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b290fa, negotiated timeout = 60000 18/04/17 17:21:28 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b290fa 18/04/17 17:21:28 INFO zookeeper.ZooKeeper: Session: 0x2626be142b290fa closed 18/04/17 17:21:28 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:21:28 INFO scheduler.JobScheduler: Finished job streaming job 1523974860000 ms.10 from job set of time 1523974860000 ms 18/04/17 17:21:28 INFO scheduler.JobScheduler: Total delay: 28.215 s for time 1523974860000 ms (execution: 28.159 s) 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1692 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1692 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1692 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1692 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1693 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1693 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1693 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1693 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1694 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1694 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1694 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1694 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1695 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1695 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1695 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1695 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1696 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1696 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1696 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1696 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1697 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1697 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1697 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1697 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1698 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1698 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1698 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1698 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1699 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1699 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1699 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1699 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1700 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1700 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1700 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1700 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1701 
from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1701 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1701 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1701 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1702 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1702 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1702 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1702 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1703 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1703 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1703 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1703 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1704 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1704 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1704 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1704 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1705 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1705 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1705 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1705 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1706 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1706 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1706 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1706 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1707 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1707 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1707 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1707 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1708 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1708 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1708 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1708 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1709 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1709 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1709 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1709 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1710 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1710 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1710 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1710 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1711 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1711 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1711 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1711 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1712 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1712 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1712 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1712 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1713 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1713 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1713 from 
persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1713 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1714 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1714 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1714 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1714 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1715 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1715 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1715 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1715 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1716 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1716 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1716 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1716 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1717 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1717 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1717 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1717 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1718 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1718 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1718 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1718 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1719 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1719 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1719 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1719 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1720 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1720 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1720 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1720 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1721 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1721 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1721 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1721 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1722 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1722 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1722 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1722 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1723 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1723 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1723 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1723 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1724 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1724 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1724 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1724 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1725 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1725 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1725 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1725 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1726 from 
persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1726 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1726 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1726 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1727 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1727 18/04/17 17:21:28 INFO kafka.KafkaRDD: Removing RDD 1727 from persistence list 18/04/17 17:21:28 INFO storage.BlockManager: Removing RDD 1727 18/04/17 17:21:28 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:21:28 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974740000 ms 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1289 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1265_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1265_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO scheduler.JobScheduler: Added jobs for time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.0 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.1 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.0 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.2 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.4 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.3 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.4 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.5 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.3 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.7 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.6 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.8 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.9 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.10 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.11 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.12 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.13 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.14 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Finished 
job streaming job 1523974920000 ms.13 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.16 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.15 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.16 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.14 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.18 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.17 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.20 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.19 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.17 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.22 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.21 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.23 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.24 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.26 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.25 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.27 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.28 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.29 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.30 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.31 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.32 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.33 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.34 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.30 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974920000 ms.35 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.35 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO scheduler.JobScheduler: 
Finished job streaming job 1523974920000 ms.21 from job set of time 1523974920000 ms 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1266 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1266_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1266_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1292 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1292 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1267 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1268 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1292 (KafkaRDD[1765] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1268_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1292 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1268_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1269 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at 
PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1267_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1267_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1292_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1270 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1292_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1292 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1292 (KafkaRDD[1765] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1292.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1293 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1293 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1270_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1293 (KafkaRDD[1772] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1292.0 (TID 1292, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1293 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1270_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1271 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1269_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1293_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1293_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 
18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1293 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1293 (KafkaRDD[1772] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1293.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1294 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1294 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1294 (KafkaRDD[1770] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1293.0 (TID 1293, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1294 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1269_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1272 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1271_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1271_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1294_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1294_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1294 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1294 (KafkaRDD[1770] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1294.0 with 1 tasks 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1274 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1295 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1295 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1295 (KafkaRDD[1771] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1294.0 (TID 1294, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1295 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1272_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: 
Removed broadcast_1272_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1292_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1295_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1273 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1275 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1295_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1295 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1295 (KafkaRDD[1771] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1295.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1296 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1296 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1296 (KafkaRDD[1775] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1295.0 (TID 1295, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1273_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1296 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1273_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1293_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1276 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1296_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1296_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1274_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1296 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1296 (KafkaRDD[1775] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1296.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1297 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1297 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 
18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1297 (KafkaRDD[1774] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1297 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1296.0 (TID 1296, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1274_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1294_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1275_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1297_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1297_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1297 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1275_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1297 (KafkaRDD[1774] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1297.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1298 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1298 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1298 (KafkaRDD[1792] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1278 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1297.0 (TID 1297, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1298 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1276_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1276_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1295_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1277 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1277_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1277_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 
INFO storage.MemoryStore: Block broadcast_1298_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1298_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1298 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1298 (KafkaRDD[1792] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1298.0 with 1 tasks 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1279 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1280 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1299 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1299 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1299 (KafkaRDD[1769] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1278_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1299 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1298.0 (TID 1298, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1297_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1278_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1280_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1280_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1299_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1299_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1299 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1299 (KafkaRDD[1769] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1299.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1302 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1300 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1300 (KafkaRDD[1784] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1281 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1299.0 (TID 1299, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1300 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1296_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1279_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1279_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1283 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1300_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1281_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1300_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1300 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1300 (KafkaRDD[1784] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1300.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1300 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1301 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1301 (KafkaRDD[1795] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1301 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1300.0 (TID 1300, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1298_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1281_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1282 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1299_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1283_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1301_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1301_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 
17:22:00 INFO spark.SparkContext: Created broadcast 1301 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1301 (KafkaRDD[1795] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1301.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1301 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1302 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1302 (KafkaRDD[1779] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1301.0 (TID 1301, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1302 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1283_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1284 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1282_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1282_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1300_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1302_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1285 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1302_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1302 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1302 (KafkaRDD[1779] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1302.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1303 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1303 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1303 (KafkaRDD[1791] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1285_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1303 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1302.0 
(TID 1302, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1285_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1286 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1284_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1284_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1288 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1303_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1303_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1286_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1303 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1303 (KafkaRDD[1791] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1303.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1304 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1304 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1304 (KafkaRDD[1793] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1304 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1286_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1303.0 (TID 1303, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1287 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1291_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1291_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1292 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1304_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1304_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1290_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1304 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1304 
(KafkaRDD[1793] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1304.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1305 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1305 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1305 (KafkaRDD[1788] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1305 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1304.0 (TID 1304, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1290_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1288_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1303_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1301_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1288_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1305_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1305_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1305 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1305 (KafkaRDD[1788] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1305.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1306 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1306 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1306 (KafkaRDD[1787] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1287_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1306 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1305.0 (TID 1305, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1287_piece0 on ***hostname 
masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1291 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1289_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Removed broadcast_1289_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1306_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1306_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1306 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1306 (KafkaRDD[1787] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1306.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1308 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1307 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1307 (KafkaRDD[1796] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO spark.ContextCleaner: Cleaned accumulator 1290 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1307 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1306.0 (TID 1306, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1307_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1307_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1307 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1307 (KafkaRDD[1796] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1307.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1307 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1308 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1308 (KafkaRDD[1776] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1308 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1305_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO 
scheduler.TaskSetManager: Starting task 0.0 in stage 1307.0 (TID 1307, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1304_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1308_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1308_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1308 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1308 (KafkaRDD[1776] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1308.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1309 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1309 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1309 (KafkaRDD[1773] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1309 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1308.0 (TID 1308, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1302_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1309_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1309_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1309 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1309 (KafkaRDD[1773] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1309.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1310 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1310 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1310 (KafkaRDD[1766] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1310 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1309.0 (TID 1309, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added 
broadcast_1306_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1307_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1310_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1310_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1310 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1310 (KafkaRDD[1766] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1310.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1311 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1311 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1311 (KafkaRDD[1790] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1311 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1310.0 (TID 1310, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1311_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1311_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1311 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1311 (KafkaRDD[1790] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1311.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1312 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1312 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1312 (KafkaRDD[1782] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1312 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1308_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1311.0 (TID 1311, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1309_piece0 in memory on ***hostname 
masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1312_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1312_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1312 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1312 (KafkaRDD[1782] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1312.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1313 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1313 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1313 (KafkaRDD[1797] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1313 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1312.0 (TID 1312, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1313_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1313_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1313 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1313 (KafkaRDD[1797] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1313.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1315 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1314 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1314 (KafkaRDD[1786] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1314 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1311_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1313.0 (TID 1313, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1314_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1314_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO 
storage.BlockManagerInfo: Added broadcast_1310_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1314 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1314 (KafkaRDD[1786] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1314.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1314 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1315 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1315 (KafkaRDD[1783] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1315 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1312_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1314.0 (TID 1314, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1315_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1315_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1315 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1315 (KafkaRDD[1783] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1315.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1316 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1316 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1316 (KafkaRDD[1798] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1316 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1315.0 (TID 1315, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1316_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1316_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1316 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1316 (KafkaRDD[1798] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1316.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Got job 1317 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1317 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1317 (KafkaRDD[1789] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1317 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1316.0 (TID 1316, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1313_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1314_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.MemoryStore: Block broadcast_1317_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1317_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:22:00 INFO spark.SparkContext: Created broadcast 1317 from broadcast at DAGScheduler.scala:1006 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1317 (KafkaRDD[1789] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Adding task set 1317.0 with 1 tasks 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1317.0 (TID 1317, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1315_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1316_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO storage.BlockManagerInfo: Added broadcast_1317_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:22:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1310.0 (TID 1310) in 54 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:22:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1310.0, whose tasks have all completed, from pool 18/04/17 17:22:00 INFO scheduler.DAGScheduler: ResultStage 1310 (foreachPartition at PredictorEngineApp.java:153) finished in 0.055 s 18/04/17 17:22:00 INFO scheduler.DAGScheduler: Job 1310 finished: foreachPartition at PredictorEngineApp.java:153, took 0.126161 s 18/04/17 17:22:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x520b7bb0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x520b7bb00x0, quorum=***hostname 
masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36699, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97ca, negotiated timeout = 60000 18/04/17 17:22:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97ca 18/04/17 17:22:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97ca closed 18/04/17 17:22:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.2 from job set of time 1523974920000 ms 18/04/17 17:22:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1317.0 (TID 1317) in 1081 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:22:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1317.0, whose tasks have all completed, from pool 18/04/17 17:22:01 INFO scheduler.DAGScheduler: ResultStage 1317 (foreachPartition at PredictorEngineApp.java:153) finished in 1.082 s 18/04/17 17:22:01 INFO scheduler.DAGScheduler: Job 1317 finished: foreachPartition at PredictorEngineApp.java:153, took 1.177974 s 18/04/17 17:22:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xae08346 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xae083460x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43085, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9816, negotiated timeout = 60000 18/04/17 17:22:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9816 18/04/17 17:22:01 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9816 closed 18/04/17 17:22:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:01 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.25 from job set of time 1523974920000 ms 18/04/17 17:22:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1308.0 (TID 1308) in 4800 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:22:04 INFO scheduler.DAGScheduler: ResultStage 1308 (foreachPartition at PredictorEngineApp.java:153) finished in 4.801 s 18/04/17 17:22:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1308.0, whose tasks have all completed, from pool 18/04/17 17:22:04 INFO scheduler.DAGScheduler: Job 1307 finished: foreachPartition at PredictorEngineApp.java:153, took 4.866376 s 18/04/17 17:22:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x14dfc60c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x14dfc60c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47689, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2910a, negotiated timeout = 60000 18/04/17 17:22:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2910a 18/04/17 17:22:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2910a closed 18/04/17 17:22:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.12 from job set of time 1523974920000 ms 18/04/17 17:22:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1295.0 (TID 1295) in 4910 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:22:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1295.0, whose tasks have all completed, from pool 18/04/17 17:22:05 INFO scheduler.DAGScheduler: ResultStage 1295 (foreachPartition at PredictorEngineApp.java:153) finished in 4.910 s 18/04/17 17:22:05 INFO scheduler.DAGScheduler: Job 1295 finished: foreachPartition at PredictorEngineApp.java:153, took 4.928252 s 18/04/17 17:22:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5c10b135 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5c10b1350x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43097, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9817, negotiated timeout = 60000 18/04/17 17:22:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9817 18/04/17 17:22:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9817 closed 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.7 from job set of time 1523974920000 ms 18/04/17 17:22:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1316.0 (TID 1316) in 5148 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:22:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1316.0, whose tasks have all completed, from pool 18/04/17 17:22:05 INFO scheduler.DAGScheduler: ResultStage 1316 (foreachPartition at PredictorEngineApp.java:153) finished in 5.149 s 18/04/17 17:22:05 INFO scheduler.DAGScheduler: Job 1316 finished: foreachPartition at PredictorEngineApp.java:153, took 5.242813 s 18/04/17 17:22:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72e128a3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x72e128a30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36719, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97d3, negotiated timeout = 60000 18/04/17 17:22:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97d3 18/04/17 17:22:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97d3 closed 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1302.0 (TID 1302) in 5221 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:22:05 INFO scheduler.DAGScheduler: ResultStage 1302 (foreachPartition at PredictorEngineApp.java:153) finished in 5.221 s 18/04/17 17:22:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1302.0, whose tasks have all completed, from pool 18/04/17 17:22:05 INFO scheduler.DAGScheduler: Job 1301 finished: foreachPartition at PredictorEngineApp.java:153, took 5.265398 s 18/04/17 17:22:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4b77418d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4b77418d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36722, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97d4, negotiated timeout = 60000 18/04/17 17:22:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.34 from job set of time 1523974920000 ms 18/04/17 17:22:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97d4 18/04/17 17:22:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97d4 closed 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.15 from job set of time 1523974920000 ms 18/04/17 17:22:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1313.0 (TID 1313) in 5310 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:22:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1313.0, whose tasks have all completed, from pool 18/04/17 17:22:05 INFO scheduler.DAGScheduler: ResultStage 1313 (foreachPartition at PredictorEngineApp.java:153) finished in 5.317 s 18/04/17 17:22:05 INFO scheduler.DAGScheduler: Job 1313 finished: foreachPartition at PredictorEngineApp.java:153, took 5.397498 s 18/04/17 17:22:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x30a98a7a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x30a98a7a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47702, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2910c, negotiated timeout = 60000 18/04/17 17:22:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2910c 18/04/17 17:22:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2910c closed 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.33 from job set of time 1523974920000 ms 18/04/17 17:22:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1293.0 (TID 1293) in 5881 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:22:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1293.0, whose tasks have all completed, from pool 18/04/17 17:22:05 INFO scheduler.DAGScheduler: ResultStage 1293 (foreachPartition at PredictorEngineApp.java:153) finished in 5.882 s 18/04/17 17:22:05 INFO scheduler.DAGScheduler: Job 1293 finished: foreachPartition at PredictorEngineApp.java:153, took 5.892811 s 18/04/17 17:22:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3a6d256e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3a6d256e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47705, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2910d, negotiated timeout = 60000 18/04/17 17:22:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2910d 18/04/17 17:22:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2910d closed 18/04/17 17:22:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.8 from job set of time 1523974920000 ms 18/04/17 17:22:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1301.0 (TID 1301) in 6415 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:22:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1301.0, whose tasks have all completed, from pool 18/04/17 17:22:06 INFO scheduler.DAGScheduler: ResultStage 1301 (foreachPartition at PredictorEngineApp.java:153) finished in 6.415 s 18/04/17 17:22:06 INFO scheduler.DAGScheduler: Job 1300 finished: foreachPartition at PredictorEngineApp.java:153, took 6.455484 s 18/04/17 17:22:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5265f7f5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5265f7f50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47709, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2910f, negotiated timeout = 60000 18/04/17 17:22:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2910f 18/04/17 17:22:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2910f closed 18/04/17 17:22:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:06 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.31 from job set of time 1523974920000 ms 18/04/17 17:22:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1307.0 (TID 1307) in 8832 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:22:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1307.0, whose tasks have all completed, from pool 18/04/17 17:22:08 INFO scheduler.DAGScheduler: ResultStage 1307 (foreachPartition at PredictorEngineApp.java:153) finished in 8.832 s 18/04/17 17:22:08 INFO scheduler.DAGScheduler: Job 1308 finished: foreachPartition at PredictorEngineApp.java:153, took 8.895216 s 18/04/17 17:22:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3f21b07d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3f21b07d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43120, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c981f, negotiated timeout = 60000 18/04/17 17:22:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c981f 18/04/17 17:22:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c981f closed 18/04/17 17:22:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.32 from job set of time 1523974920000 ms 18/04/17 17:22:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1298.0 (TID 1298) in 9126 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:22:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1298.0, whose tasks have all completed, from pool 18/04/17 17:22:09 INFO scheduler.DAGScheduler: ResultStage 1298 (foreachPartition at PredictorEngineApp.java:153) finished in 9.127 s 18/04/17 17:22:09 INFO scheduler.DAGScheduler: Job 1298 finished: foreachPartition at PredictorEngineApp.java:153, took 9.157122 s 18/04/17 17:22:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x76e4ddb3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x76e4ddb30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47719, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29111, negotiated timeout = 60000 18/04/17 17:22:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29111 18/04/17 17:22:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29111 closed 18/04/17 17:22:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.28 from job set of time 1523974920000 ms 18/04/17 17:22:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1303.0 (TID 1303) in 9573 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:22:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1303.0, whose tasks have all completed, from pool 18/04/17 17:22:09 INFO scheduler.DAGScheduler: ResultStage 1303 (foreachPartition at PredictorEngineApp.java:153) finished in 9.574 s 18/04/17 17:22:09 INFO scheduler.DAGScheduler: Job 1303 finished: foreachPartition at PredictorEngineApp.java:153, took 9.622432 s 18/04/17 17:22:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6b72569a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6b72569a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47724, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29112, negotiated timeout = 60000 18/04/17 17:22:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29112 18/04/17 17:22:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29112 closed 18/04/17 17:22:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.27 from job set of time 1523974920000 ms 18/04/17 17:22:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1312.0 (TID 1312) in 10648 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:22:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1312.0, whose tasks have all completed, from pool 18/04/17 17:22:10 INFO scheduler.DAGScheduler: ResultStage 1312 (foreachPartition at PredictorEngineApp.java:153) finished in 10.649 s 18/04/17 17:22:10 INFO scheduler.DAGScheduler: Job 1312 finished: foreachPartition at PredictorEngineApp.java:153, took 10.726575 s 18/04/17 17:22:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x29215719 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x292157190x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43134, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9822, negotiated timeout = 60000 18/04/17 17:22:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9822 18/04/17 17:22:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9822 closed 18/04/17 17:22:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.18 from job set of time 1523974920000 ms 18/04/17 17:22:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1304.0 (TID 1304) in 10746 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:22:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1304.0, whose tasks have all completed, from pool 18/04/17 17:22:10 INFO scheduler.DAGScheduler: ResultStage 1304 (foreachPartition at PredictorEngineApp.java:153) finished in 10.748 s 18/04/17 17:22:10 INFO scheduler.DAGScheduler: Job 1304 finished: foreachPartition at PredictorEngineApp.java:153, took 10.799658 s 18/04/17 17:22:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5edd6de8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5edd6de80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36755, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97d5, negotiated timeout = 60000 18/04/17 17:22:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97d5 18/04/17 17:22:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97d5 closed 18/04/17 17:22:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.29 from job set of time 1523974920000 ms 18/04/17 17:22:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1315.0 (TID 1315) in 14971 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:22:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1315.0, whose tasks have all completed, from pool 18/04/17 17:22:15 INFO scheduler.DAGScheduler: ResultStage 1315 (foreachPartition at PredictorEngineApp.java:153) finished in 14.972 s 18/04/17 17:22:15 INFO scheduler.DAGScheduler: Job 1314 finished: foreachPartition at PredictorEngineApp.java:153, took 15.063012 s 18/04/17 17:22:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x49a5c6c2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x49a5c6c20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43148, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9825, negotiated timeout = 60000 18/04/17 17:22:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9825 18/04/17 17:22:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9825 closed 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.19 from job set of time 1523974920000 ms 18/04/17 17:22:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1305.0 (TID 1305) in 15075 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:22:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1305.0, whose tasks have all completed, from pool 18/04/17 17:22:15 INFO scheduler.DAGScheduler: ResultStage 1305 (foreachPartition at PredictorEngineApp.java:153) finished in 15.076 s 18/04/17 17:22:15 INFO scheduler.DAGScheduler: Job 1305 finished: foreachPartition at PredictorEngineApp.java:153, took 15.131661 s 18/04/17 17:22:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x73450fa2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x73450fa20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43151, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9826, negotiated timeout = 60000 18/04/17 17:22:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9826 18/04/17 17:22:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9826 closed 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.24 from job set of time 1523974920000 ms 18/04/17 17:22:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1306.0 (TID 1306) in 15198 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:22:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1306.0, whose tasks have all completed, from pool 18/04/17 17:22:15 INFO scheduler.DAGScheduler: ResultStage 1306 (foreachPartition at PredictorEngineApp.java:153) finished in 15.198 s 18/04/17 17:22:15 INFO scheduler.DAGScheduler: Job 1306 finished: foreachPartition at PredictorEngineApp.java:153, took 15.258209 s 18/04/17 17:22:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x75049ba4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x75049ba40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43154, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9827, negotiated timeout = 60000 18/04/17 17:22:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9827 18/04/17 17:22:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9827 closed 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.23 from job set of time 1523974920000 ms 18/04/17 17:22:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1309.0 (TID 1309) in 15504 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:22:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1309.0, whose tasks have all completed, from pool 18/04/17 17:22:15 INFO scheduler.DAGScheduler: ResultStage 1309 (foreachPartition at PredictorEngineApp.java:153) finished in 15.505 s 18/04/17 17:22:15 INFO scheduler.DAGScheduler: Job 1309 finished: foreachPartition at PredictorEngineApp.java:153, took 15.573471 s 18/04/17 17:22:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xa4a99f3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xa4a99f30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47752, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29114, negotiated timeout = 60000 18/04/17 17:22:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29114 18/04/17 17:22:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29114 closed 18/04/17 17:22:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.9 from job set of time 1523974920000 ms 18/04/17 17:22:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1300.0 (TID 1300) in 16015 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:22:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 1300.0, whose tasks have all completed, from pool 18/04/17 17:22:16 INFO scheduler.DAGScheduler: ResultStage 1300 (foreachPartition at PredictorEngineApp.java:153) finished in 16.016 s 18/04/17 17:22:16 INFO scheduler.DAGScheduler: Job 1302 finished: foreachPartition at PredictorEngineApp.java:153, took 16.052355 s 18/04/17 17:22:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3e7247e2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3e7247e20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43161, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9828, negotiated timeout = 60000 18/04/17 17:22:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9828 18/04/17 17:22:16 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9828 closed 18/04/17 17:22:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:16 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.20 from job set of time 1523974920000 ms 18/04/17 17:22:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1294.0 (TID 1294) in 16269 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:22:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 1294.0, whose tasks have all completed, from pool 18/04/17 17:22:16 INFO scheduler.DAGScheduler: ResultStage 1294 (foreachPartition at PredictorEngineApp.java:153) finished in 16.269 s 18/04/17 17:22:16 INFO scheduler.DAGScheduler: Job 1294 finished: foreachPartition at PredictorEngineApp.java:153, took 16.283287 s 18/04/17 17:22:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x41237afe connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x41237afe0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47759, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29117, negotiated timeout = 60000 18/04/17 17:22:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29117 18/04/17 17:22:16 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29117 closed 18/04/17 17:22:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:16 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.6 from job set of time 1523974920000 ms 18/04/17 17:22:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1296.0 (TID 1296) in 22181 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:22:22 INFO scheduler.DAGScheduler: ResultStage 1296 (foreachPartition at PredictorEngineApp.java:153) finished in 22.182 s 18/04/17 17:22:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 1296.0, whose tasks have all completed, from pool 18/04/17 17:22:22 INFO scheduler.DAGScheduler: Job 1296 finished: foreachPartition at PredictorEngineApp.java:153, took 22.204299 s 18/04/17 17:22:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x51323c86 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x51323c860x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43176, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c982c, negotiated timeout = 60000 18/04/17 17:22:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c982c 18/04/17 17:22:22 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c982c closed 18/04/17 17:22:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:22 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.11 from job set of time 1523974920000 ms 18/04/17 17:22:23 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1314.0 (TID 1314) in 23363 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:22:23 INFO cluster.YarnClusterScheduler: Removed TaskSet 1314.0, whose tasks have all completed, from pool 18/04/17 17:22:23 INFO scheduler.DAGScheduler: ResultStage 1314 (foreachPartition at PredictorEngineApp.java:153) finished in 23.364 s 18/04/17 17:22:23 INFO scheduler.DAGScheduler: Job 1315 finished: foreachPartition at PredictorEngineApp.java:153, took 23.453242 s 18/04/17 17:22:23 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63ff740d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:23 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x63ff740d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:23 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:23 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36798, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:23 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97dc, negotiated timeout = 60000 18/04/17 17:22:23 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97dc 18/04/17 17:22:23 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97dc closed 18/04/17 17:22:23 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:23 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.22 from job set of time 1523974920000 ms 18/04/17 17:22:26 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1292.0 (TID 1292) in 26107 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:22:26 INFO cluster.YarnClusterScheduler: Removed TaskSet 1292.0, whose tasks have all completed, from pool 18/04/17 17:22:26 INFO scheduler.DAGScheduler: ResultStage 1292 (foreachPartition at PredictorEngineApp.java:153) finished in 26.107 s 18/04/17 17:22:26 INFO scheduler.DAGScheduler: Job 1292 finished: foreachPartition at PredictorEngineApp.java:153, took 26.114577 s 18/04/17 17:22:26 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d82776c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:26 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7d82776c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:26 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:26 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36806, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:26 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97de, negotiated timeout = 60000 18/04/17 17:22:26 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97de 18/04/17 17:22:26 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97de closed 18/04/17 17:22:26 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:26 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.1 from job set of time 1523974920000 ms 18/04/17 17:22:26 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1311.0 (TID 1311) in 26592 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:22:26 INFO cluster.YarnClusterScheduler: Removed TaskSet 1311.0, whose tasks have all completed, from pool 18/04/17 17:22:26 INFO scheduler.DAGScheduler: ResultStage 1311 (foreachPartition at PredictorEngineApp.java:153) finished in 26.593 s 18/04/17 17:22:26 INFO scheduler.DAGScheduler: Job 1311 finished: foreachPartition at PredictorEngineApp.java:153, took 26.667451 s 18/04/17 17:22:26 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x76d2c9f7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:26 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x76d2c9f70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:26 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:26 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43191, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:26 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c982f, negotiated timeout = 60000 18/04/17 17:22:26 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c982f 18/04/17 17:22:26 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c982f closed 18/04/17 17:22:26 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:26 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.26 from job set of time 1523974920000 ms 18/04/17 17:22:29 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1299.0 (TID 1299) in 29659 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:22:29 INFO cluster.YarnClusterScheduler: Removed TaskSet 1299.0, whose tasks have all completed, from pool 18/04/17 17:22:29 INFO scheduler.DAGScheduler: ResultStage 1299 (foreachPartition at PredictorEngineApp.java:153) finished in 29.660 s 18/04/17 17:22:29 INFO scheduler.DAGScheduler: Job 1299 finished: foreachPartition at PredictorEngineApp.java:153, took 29.693523 s 18/04/17 17:22:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6d2c56e4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6d2c56e40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36815, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:29 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97e0, negotiated timeout = 60000 18/04/17 17:22:29 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97e0 18/04/17 17:22:29 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97e0 closed 18/04/17 17:22:29 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:29 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.5 from job set of time 1523974920000 ms 18/04/17 17:22:31 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1297.0 (TID 1297) in 31557 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:22:31 INFO cluster.YarnClusterScheduler: Removed TaskSet 1297.0, whose tasks have all completed, from pool 18/04/17 17:22:31 INFO scheduler.DAGScheduler: ResultStage 1297 (foreachPartition at PredictorEngineApp.java:153) finished in 31.557 s 18/04/17 17:22:31 INFO scheduler.DAGScheduler: Job 1297 finished: foreachPartition at PredictorEngineApp.java:153, took 31.583871 s 18/04/17 17:22:31 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3a215d78 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:22:31 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3a215d780x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:22:31 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:22:31 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36821, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:22:31 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97e2, negotiated timeout = 60000 18/04/17 17:22:31 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97e2 18/04/17 17:22:31 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97e2 closed 18/04/17 17:22:31 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:22:31 INFO scheduler.JobScheduler: Finished job streaming job 1523974920000 ms.10 from job set of time 1523974920000 ms 18/04/17 17:22:31 INFO scheduler.JobScheduler: Total delay: 31.688 s for time 1523974920000 ms (execution: 31.621 s) 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1728 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1728 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1728 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1728 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1729 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1729 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1729 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1729 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1730 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1730 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1730 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1730 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1731 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1731 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1731 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1731 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1732 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1732 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1732 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1732 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1733 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1733 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1733 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1733 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1734 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1734 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1734 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1734 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1735 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1735 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1735 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1735 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1736 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1736 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1736 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1736 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1737 
from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1737 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1737 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1737 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1738 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1738 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1738 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1738 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1739 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1739 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1739 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1739 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1740 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1740 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1740 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1740 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1741 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1741 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1741 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1741 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1742 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1742 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1742 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1742 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1743 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1743 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1743 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1743 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1744 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1744 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1744 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1744 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1745 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1745 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1745 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1745 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1746 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1746 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1746 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1746 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1747 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1747 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1747 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1747 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1748 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1748 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1748 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1748 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1749 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1749 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1749 from 
persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1749 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1750 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1750 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1750 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1750 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1751 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1751 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1751 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1751 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1752 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1752 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1752 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1752 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1753 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1753 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1753 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1753 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1754 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1754 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1754 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1754 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1755 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1755 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1755 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1755 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1756 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1756 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1756 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1756 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1757 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1757 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1757 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1757 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1758 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1758 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1758 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1758 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1759 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1759 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1759 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1759 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1760 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1760 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1760 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1760 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1761 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1761 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1761 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1761 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1762 from 
persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1762 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1762 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1762 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1763 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1763 18/04/17 17:22:31 INFO kafka.KafkaRDD: Removing RDD 1763 from persistence list 18/04/17 17:22:31 INFO storage.BlockManager: Removing RDD 1763 18/04/17 17:22:31 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:22:31 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974800000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Added jobs for time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.0 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.1 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.2 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.3 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.4 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.0 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.3 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.5 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.4 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.6 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.7 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.8 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.9 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.10 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.11 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.12 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.13 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.14 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.13 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.16 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.14 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO 
scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.16 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.15 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.18 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.17 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.19 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.17 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.20 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.21 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.21 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.22 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.23 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.24 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.26 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.25 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.27 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.28 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.29 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.30 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.30 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.31 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.32 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.33 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.34 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.JobScheduler: Starting job streaming job 1523974980000 ms.35 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at 
PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1319 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1318 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1318 (KafkaRDD[1826] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1318 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1318_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1318_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1318 from broadcast at DAGScheduler.scala:1006 
18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1318 (KafkaRDD[1826] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1318.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1318 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1319 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1319 (KafkaRDD[1801] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1318.0 (TID 1318, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1319 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1319_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1319_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1319 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1319 (KafkaRDD[1801] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1319.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1320 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1320 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1320 (KafkaRDD[1834] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1319.0 (TID 1319, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1320 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1320_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1320_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1320 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1320 (KafkaRDD[1834] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1320.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1321 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1321 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1321 (KafkaRDD[1825] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1320.0 (TID 1320, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1321 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1321_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1321_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1321 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1321 (KafkaRDD[1825] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1321.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1322 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1322 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1322 (KafkaRDD[1820] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1321.0 (TID 1321, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1322 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1322_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1322_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1322 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1322 (KafkaRDD[1820] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1322.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1323 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1323 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1323 (KafkaRDD[1811] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1323 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 
INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1322.0 (TID 1322, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1323_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1323_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1323 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1323 (KafkaRDD[1811] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1323.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1324 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1324 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1324 (KafkaRDD[1808] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1324 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1323.0 (TID 1323, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1318_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1324_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1324_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1324 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1324 (KafkaRDD[1808] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1324.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1326 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1325 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1325 (KafkaRDD[1810] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1325 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1324.0 (TID 1324, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1321_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed 
broadcast_1314_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1325_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1325_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1325 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1323_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1325 (KafkaRDD[1810] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1325.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1325 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1326 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1326 (KafkaRDD[1815] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1326 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1325.0 (TID 1325, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1326_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1314_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1326_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1320_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1326 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1326 (KafkaRDD[1815] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1326.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1327 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1327 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1327 (KafkaRDD[1818] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1322_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1327 stored as values in memory (estimated size 5.7 KB, free 
491.3 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1326.0 (TID 1326, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1327_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1327_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1325_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1327 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1327 (KafkaRDD[1818] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1327.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1328 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1328 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1328 (KafkaRDD[1802] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1328 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1327.0 (TID 1327, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1293 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1295 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1293_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1293_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1328_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1294 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1328_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1328 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1328 (KafkaRDD[1802] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1328.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1329 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1329 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1329 (KafkaRDD[1809] at createDirectStream at PredictorEngineApp.java:125), 
which has no missing parents 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1292_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1329 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1328.0 (TID 1328, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1324_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1292_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1296 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1329_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1329_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1294_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1329 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1329 (KafkaRDD[1809] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1329.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1330 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1330 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1330 (KafkaRDD[1805] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1319_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1330 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1329.0 (TID 1329, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1294_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1298 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1296_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1296_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1330_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1330_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1330 from broadcast at 
DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1330 (KafkaRDD[1805] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1330.0 with 1 tasks 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1297 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1331 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1331 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1331 (KafkaRDD[1832] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1331 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1295_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1330.0 (TID 1330, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1295_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1300 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1331_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1331_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1298_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1328_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1331 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1327_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1331 (KafkaRDD[1832] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1331.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1332 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1332 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1332 (KafkaRDD[1831] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1332 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1331.0 (TID 1331, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2040 
bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1298_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1299 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1329_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1297_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1332_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1297_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1332_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1332 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1332 (KafkaRDD[1831] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1332.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1334 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1333 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1302 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1333 (KafkaRDD[1829] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1333 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1300_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1332.0 (TID 1332, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1300_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1301 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1333_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1299_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1333_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1333 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1333 (KafkaRDD[1829] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1333.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1333 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 
18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1334 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1330_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1334 (KafkaRDD[1822] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1334 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1299_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1333.0 (TID 1333, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1303 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1326_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1301_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1301_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1334_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1334_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1304 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1334 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1334 (KafkaRDD[1822] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1334.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1335 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1335 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1335 (KafkaRDD[1828] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1302_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1335 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1334.0 (TID 1334, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1302_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1335_piece0 stored as bytes in memory (estimated 
size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1335_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1335 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1335 (KafkaRDD[1828] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1335.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1336 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1336 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1336 (KafkaRDD[1824] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1331_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1303_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1336 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1335.0 (TID 1335, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1303_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1306 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1304_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1336_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1336_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1336 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1304_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1336 (KafkaRDD[1824] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1336.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1337 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1337 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1337 (KafkaRDD[1835] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1337 stored as values in memory (estimated 
size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1305 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1308 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1336.0 (TID 1336, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1306_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1306_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1337_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1337_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1337 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1337 (KafkaRDD[1835] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1337.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1338 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1338 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1338 (KafkaRDD[1827] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1338 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1337.0 (TID 1337, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1307 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1334_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1305_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1305_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1338_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1338_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1338 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1338 (KafkaRDD[1827] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1338.0 with 1 tasks 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1332_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1339 (foreachPartition at 
PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1339 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1339 (KafkaRDD[1807] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1339 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1338.0 (TID 1338, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1336_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1339_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1339_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1339 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1339 (KafkaRDD[1807] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1339.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1340 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1340 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1340 (KafkaRDD[1806] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1340 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1339.0 (TID 1339, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1340_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1340_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1308_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1340 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1340 (KafkaRDD[1806] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1340.0 with 1 tasks 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1335_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1341 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 
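[Editor's note on the recurring pattern above: each micro-batch output operation shows up as a job named "foreachPartition at PredictorEngineApp.java:153" submitted against a KafkaRDD produced by "createDirectStream at PredictorEngineApp.java:125", yielding a single one-task ResultStage plus a small (~3.1 KB) task broadcast. The application source is not part of this log, so the following is only a minimal Java sketch, assuming Spark 1.6 with the Kafka 0.8 direct-stream API, of driver code that would produce exactly these call sites; broker list, topic name, table logic and batch interval are placeholders, and the real application evidently registers several dozen such output operations (the "streaming job 1523974980000 ms.2" ... "ms.35" entries), while this sketch shows just one.]

// Hypothetical sketch, not the actual PredictorEngineApp source.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class PredictorEngineSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        // Batch interval is a placeholder; the log only shows one batch time (1523974980000 ms).
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers
        Set<String> topics = new HashSet<>();
        topics.add("example-topic"); // placeholder topic

        // Corresponds to the "createDirectStream at PredictorEngineApp.java:125" call site:
        // every batch produces one KafkaRDD per registered stream.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        // Corresponds to the "foreachPartition at PredictorEngineApp.java:153" jobs:
        // each batch of each stream becomes one job with a single ResultStage.
        stream.foreachRDD(rdd -> rdd.foreachPartition(records -> {
            while (records.hasNext()) {
                records.next(); // process one Kafka record, e.g. score it and persist the result
            }
        }));

        jssc.start();
        jssc.awaitTermination();
    }
}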
18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1341 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1341 (KafkaRDD[1812] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1308_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1341 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1340.0 (TID 1340, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1337_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1341_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1341_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1341 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1309 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1341 (KafkaRDD[1812] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1341.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1342 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1342 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1342 (KafkaRDD[1819] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1342 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1307_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1341.0 (TID 1341, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1307_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1342_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1342_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1342 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1342 (KafkaRDD[1819] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task 
set 1342.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1343 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1343 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1343 (KafkaRDD[1833] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1343 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1342.0 (TID 1342, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1343_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1343_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1343 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1343 (KafkaRDD[1833] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1343.0 with 1 tasks 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Got job 1344 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1344 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1344 (KafkaRDD[1823] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1344 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1343.0 (TID 1343, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1339_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.MemoryStore: Block broadcast_1344_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1344_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO spark.SparkContext: Created broadcast 1344 from broadcast at DAGScheduler.scala:1006 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1344 (KafkaRDD[1823] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Adding task set 1344.0 with 1 tasks 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1333_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1344.0 (TID 1344, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 
17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1340_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1338_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1321.0 (TID 1321) in 83 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1321.0, whose tasks have all completed, from pool 18/04/17 17:23:00 INFO scheduler.DAGScheduler: ResultStage 1321 (foreachPartition at PredictorEngineApp.java:153) finished in 0.084 s 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Job 1321 finished: foreachPartition at PredictorEngineApp.java:153, took 0.100380 s 18/04/17 17:23:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x28e62b1f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x28e62b1f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1342_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47922, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1344_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1328.0 (TID 1328) in 52 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: ResultStage 1328 (foreachPartition at PredictorEngineApp.java:153) finished in 0.053 s 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1328.0, whose tasks have all completed, from pool 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Job 1328 finished: foreachPartition at PredictorEngineApp.java:153, took 0.103726 s 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1311 18/04/17 17:23:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x715fa5a0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x715fa5a00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1309_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43328, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1309_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1310 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1313 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1311_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1311_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1312 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1310_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1310_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1343_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2911e, negotiated timeout = 60000 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1313_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1313_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1314 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1312_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1312_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1315 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1316 18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1318 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1316_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1316_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2911e 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9838, negotiated timeout = 60000 18/04/17 17:23:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9838 18/04/17 17:23:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2911e closed 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9838 closed 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Added broadcast_1341_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 
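[Editor's note: interleaved with the scheduler output, each finished job is followed on the driver by a short-lived HBase client connection: RecoverableZooKeeper logs a fresh hconnection-0x... process identifier, a ZooKeeper session is negotiated against baseZNode=/hbase, and the session is closed again within the same second. The sketch below is a hypothetical example using the public HBase 1.x client API (the actual table, column family, and purpose of these driver-side writes are not visible in the log); it shows the kind of create-use-close sequence that emits one such ZooKeeper open/close pair per call.]

// Hypothetical driver-side sketch, not taken from PredictorEngineApp.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseWriteSketch {
    public static void recordBatchResult(byte[] rowKey, byte[] value) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath

        // ConnectionFactory.createConnection starts a ZooKeeper session (the
        // "Initiating client connection ... baseZNode=/hbase" lines above), and close()
        // produces the matching "Closing zookeeper sessionid=..." / "Session ... closed" lines.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("predictor_results"))) { // placeholder table
            Put put = new Put(rowKey);
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), value); // placeholder family/qualifier
            table.put(put);
        }
        // Re-creating the connection for every job works, but is comparatively expensive;
        // a single long-lived, shared Connection is the usual alternative when a log shows
        // one ZooKeeper open/close pair per batch like this one does.
    }
}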
18/04/17 17:23:00 INFO spark.ContextCleaner: Cleaned accumulator 1317 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1315_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1315_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.25 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1317_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:00 INFO storage.BlockManagerInfo: Removed broadcast_1317_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.2 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1337.0 (TID 1337) in 61 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1337.0, whose tasks have all completed, from pool 18/04/17 17:23:00 INFO scheduler.DAGScheduler: ResultStage 1337 (foreachPartition at PredictorEngineApp.java:153) finished in 0.062 s 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Job 1337 finished: foreachPartition at PredictorEngineApp.java:153, took 0.138245 s 18/04/17 17:23:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6b2f0949 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6b2f09490x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47928, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2911f, negotiated timeout = 60000 18/04/17 17:23:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2911f 18/04/17 17:23:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2911f closed 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.35 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1332.0 (TID 1332) in 156 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1332.0, whose tasks have all completed, from pool 18/04/17 17:23:00 INFO scheduler.DAGScheduler: ResultStage 1332 (foreachPartition at PredictorEngineApp.java:153) finished in 0.157 s 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Job 1332 finished: foreachPartition at PredictorEngineApp.java:153, took 0.219687 s 18/04/17 17:23:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6a7b6bdd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6a7b6bdd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43337, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c983a, negotiated timeout = 60000 18/04/17 17:23:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c983a 18/04/17 17:23:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c983a closed 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.31 from job set of time 1523974980000 ms 18/04/17 17:23:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1341.0 (TID 1341) in 181 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:23:00 INFO scheduler.DAGScheduler: ResultStage 1341 (foreachPartition at PredictorEngineApp.java:153) finished in 0.181 s 18/04/17 17:23:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1341.0, whose tasks have all completed, from pool 18/04/17 17:23:00 INFO scheduler.DAGScheduler: Job 1341 finished: foreachPartition at PredictorEngineApp.java:153, took 0.269018 s 18/04/17 17:23:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x626cec1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x626cec10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47935, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29122, negotiated timeout = 60000 18/04/17 17:23:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29122 18/04/17 17:23:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29122 closed 18/04/17 17:23:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:00 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.12 from job set of time 1523974980000 ms 18/04/17 17:23:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1324.0 (TID 1324) in 3811 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:23:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1324.0, whose tasks have all completed, from pool 18/04/17 17:23:03 INFO scheduler.DAGScheduler: ResultStage 1324 (foreachPartition at PredictorEngineApp.java:153) finished in 3.811 s 18/04/17 17:23:03 INFO scheduler.DAGScheduler: Job 1324 finished: foreachPartition at PredictorEngineApp.java:153, took 3.837172 s 18/04/17 17:23:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x59324d83 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x59324d830x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43349, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9842, negotiated timeout = 60000 18/04/17 17:23:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9842 18/04/17 17:23:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9842 closed 18/04/17 17:23:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:03 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.8 from job set of time 1523974980000 ms 18/04/17 17:23:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1339.0 (TID 1339) in 4101 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:23:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1339.0, whose tasks have all completed, from pool 18/04/17 17:23:04 INFO scheduler.DAGScheduler: ResultStage 1339 (foreachPartition at PredictorEngineApp.java:153) finished in 4.102 s 18/04/17 17:23:04 INFO scheduler.DAGScheduler: Job 1339 finished: foreachPartition at PredictorEngineApp.java:153, took 4.184662 s 18/04/17 17:23:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3f004fbb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3f004fbb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43352, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9844, negotiated timeout = 60000 18/04/17 17:23:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9844 18/04/17 17:23:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9844 closed 18/04/17 17:23:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:04 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.7 from job set of time 1523974980000 ms 18/04/17 17:23:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1320.0 (TID 1320) in 5262 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:23:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1320.0, whose tasks have all completed, from pool 18/04/17 17:23:05 INFO scheduler.DAGScheduler: ResultStage 1320 (foreachPartition at PredictorEngineApp.java:153) finished in 5.262 s 18/04/17 17:23:05 INFO scheduler.DAGScheduler: Job 1320 finished: foreachPartition at PredictorEngineApp.java:153, took 5.276107 s 18/04/17 17:23:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3b31e096 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3b31e0960x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36975, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97f0, negotiated timeout = 60000 18/04/17 17:23:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97f0 18/04/17 17:23:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97f0 closed 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.34 from job set of time 1523974980000 ms 18/04/17 17:23:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1338.0 (TID 1338) in 5292 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:23:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1338.0, whose tasks have all completed, from pool 18/04/17 17:23:05 INFO scheduler.DAGScheduler: ResultStage 1338 (foreachPartition at PredictorEngineApp.java:153) finished in 5.293 s 18/04/17 17:23:05 INFO scheduler.DAGScheduler: Job 1338 finished: foreachPartition at PredictorEngineApp.java:153, took 5.373039 s 18/04/17 17:23:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3d1183f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3d1183f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47956, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29127, negotiated timeout = 60000 18/04/17 17:23:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29127 18/04/17 17:23:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29127 closed 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.27 from job set of time 1523974980000 ms 18/04/17 17:23:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1331.0 (TID 1331) in 5600 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:23:05 INFO scheduler.DAGScheduler: ResultStage 1331 (foreachPartition at PredictorEngineApp.java:153) finished in 5.601 s 18/04/17 17:23:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1331.0, whose tasks have all completed, from pool 18/04/17 17:23:05 INFO scheduler.DAGScheduler: Job 1331 finished: foreachPartition at PredictorEngineApp.java:153, took 5.660733 s 18/04/17 17:23:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1091f275 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1091f2750x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:36982, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97f1, negotiated timeout = 60000 18/04/17 17:23:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97f1 18/04/17 17:23:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97f1 closed 18/04/17 17:23:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:05 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.32 from job set of time 1523974980000 ms 18/04/17 17:23:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1326.0 (TID 1326) in 9306 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:23:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1326.0, whose tasks have all completed, from pool 18/04/17 17:23:09 INFO scheduler.DAGScheduler: ResultStage 1326 (foreachPartition at PredictorEngineApp.java:153) finished in 9.306 s 18/04/17 17:23:09 INFO scheduler.DAGScheduler: Job 1325 finished: foreachPartition at PredictorEngineApp.java:153, took 9.351076 s 18/04/17 17:23:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x38c9ff74 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x38c9ff740x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43372, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9848, negotiated timeout = 60000 18/04/17 17:23:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9848 18/04/17 17:23:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9848 closed 18/04/17 17:23:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.15 from job set of time 1523974980000 ms 18/04/17 17:23:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1327.0 (TID 1327) in 9591 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:23:09 INFO scheduler.DAGScheduler: ResultStage 1327 (foreachPartition at PredictorEngineApp.java:153) finished in 9.591 s 18/04/17 17:23:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1327.0, whose tasks have all completed, from pool 18/04/17 17:23:09 INFO scheduler.DAGScheduler: Job 1327 finished: foreachPartition at PredictorEngineApp.java:153, took 9.638920 s 18/04/17 17:23:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x59a3835e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x59a3835e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43375, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9849, negotiated timeout = 60000 18/04/17 17:23:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9849 18/04/17 17:23:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9849 closed 18/04/17 17:23:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:09 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.18 from job set of time 1523974980000 ms 18/04/17 17:23:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1336.0 (TID 1336) in 10027 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:23:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1336.0, whose tasks have all completed, from pool 18/04/17 17:23:10 INFO scheduler.DAGScheduler: ResultStage 1336 (foreachPartition at PredictorEngineApp.java:153) finished in 10.027 s 18/04/17 17:23:10 INFO scheduler.DAGScheduler: Job 1336 finished: foreachPartition at PredictorEngineApp.java:153, took 10.102151 s 18/04/17 17:23:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x47431d57 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x47431d570x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43379, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c984a, negotiated timeout = 60000 18/04/17 17:23:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c984a 18/04/17 17:23:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c984a closed 18/04/17 17:23:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.24 from job set of time 1523974980000 ms 18/04/17 17:23:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1329.0 (TID 1329) in 10545 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:23:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1329.0, whose tasks have all completed, from pool 18/04/17 17:23:10 INFO scheduler.DAGScheduler: ResultStage 1329 (foreachPartition at PredictorEngineApp.java:153) finished in 10.545 s 18/04/17 17:23:10 INFO scheduler.DAGScheduler: Job 1329 finished: foreachPartition at PredictorEngineApp.java:153, took 10.599245 s 18/04/17 17:23:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x222ccb42 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x222ccb420x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47979, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29128, negotiated timeout = 60000 18/04/17 17:23:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29128 18/04/17 17:23:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29128 closed 18/04/17 17:23:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:10 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.9 from job set of time 1523974980000 ms 18/04/17 17:23:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1344.0 (TID 1344) in 11513 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:23:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1344.0, whose tasks have all completed, from pool 18/04/17 17:23:11 INFO scheduler.DAGScheduler: ResultStage 1344 (foreachPartition at PredictorEngineApp.java:153) finished in 11.513 s 18/04/17 17:23:11 INFO scheduler.DAGScheduler: Job 1344 finished: foreachPartition at PredictorEngineApp.java:153, took 11.606651 s 18/04/17 17:23:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x565bb4f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x565bb4f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43388, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c984c, negotiated timeout = 60000 18/04/17 17:23:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c984c 18/04/17 17:23:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c984c closed 18/04/17 17:23:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.23 from job set of time 1523974980000 ms 18/04/17 17:23:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1343.0 (TID 1343) in 11651 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:23:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1343.0, whose tasks have all completed, from pool 18/04/17 17:23:11 INFO scheduler.DAGScheduler: ResultStage 1343 (foreachPartition at PredictorEngineApp.java:153) finished in 11.651 s 18/04/17 17:23:11 INFO scheduler.DAGScheduler: Job 1343 finished: foreachPartition at PredictorEngineApp.java:153, took 11.742501 s 18/04/17 17:23:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x557d0537 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x557d05370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43391, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c984d, negotiated timeout = 60000 18/04/17 17:23:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c984d 18/04/17 17:23:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c984d closed 18/04/17 17:23:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.33 from job set of time 1523974980000 ms 18/04/17 17:23:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1334.0 (TID 1334) in 13197 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:23:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1334.0, whose tasks have all completed, from pool 18/04/17 17:23:13 INFO scheduler.DAGScheduler: ResultStage 1334 (foreachPartition at PredictorEngineApp.java:153) finished in 13.199 s 18/04/17 17:23:13 INFO scheduler.DAGScheduler: Job 1333 finished: foreachPartition at PredictorEngineApp.java:153, took 13.267545 s 18/04/17 17:23:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6c04fba3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6c04fba30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:47994, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2912d, negotiated timeout = 60000 18/04/17 17:23:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2912d 18/04/17 17:23:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2912d closed 18/04/17 17:23:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:13 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.22 from job set of time 1523974980000 ms 18/04/17 17:23:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1340.0 (TID 1340) in 14434 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:23:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1340.0, whose tasks have all completed, from pool 18/04/17 17:23:14 INFO scheduler.DAGScheduler: ResultStage 1340 (foreachPartition at PredictorEngineApp.java:153) finished in 14.435 s 18/04/17 17:23:14 INFO scheduler.DAGScheduler: Job 1340 finished: foreachPartition at PredictorEngineApp.java:153, took 14.520576 s 18/04/17 17:23:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2551ea40 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2551ea400x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37021, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97f8, negotiated timeout = 60000 18/04/17 17:23:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97f8 18/04/17 17:23:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97f8 closed 18/04/17 17:23:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:14 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.6 from job set of time 1523974980000 ms 18/04/17 17:23:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1333.0 (TID 1333) in 14873 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:23:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1333.0, whose tasks have all completed, from pool 18/04/17 17:23:14 INFO scheduler.DAGScheduler: ResultStage 1333 (foreachPartition at PredictorEngineApp.java:153) finished in 14.874 s 18/04/17 17:23:14 INFO scheduler.DAGScheduler: Job 1334 finished: foreachPartition at PredictorEngineApp.java:153, took 14.939633 s 18/04/17 17:23:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1654fd6f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1654fd6f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43406, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1340 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1321 18/04/17 17:23:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1335.0 (TID 1335) in 14878 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:23:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1335.0, whose tasks have all completed, from pool 18/04/17 17:23:15 INFO scheduler.DAGScheduler: ResultStage 1335 (foreachPartition at PredictorEngineApp.java:153) finished in 14.879 s 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1321_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO scheduler.DAGScheduler: Job 1335 finished: foreachPartition at PredictorEngineApp.java:153, took 14.950985 s 18/04/17 17:23:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7bbca655 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7bbca6550x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c984f, negotiated timeout = 60000 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37025, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1321_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1320_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a97f9, negotiated timeout = 60000 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1320_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c984f 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1324_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1324_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1325 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1344_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a97f9 18/04/17 17:23:15 
INFO zookeeper.ZooKeeper: Session: 0x1626be1444c984f closed 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1344_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1345 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1343_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1343_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1326_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1326_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1327 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1329 18/04/17 17:23:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a97f9 closed 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1327_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1327_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1328 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1329_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1329_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.29 from job set of time 1523974980000 ms 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1330 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1328_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1328_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1332 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1332_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1332_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1333 18/04/17 17:23:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.28 from job set of time 1523974980000 ms 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1331_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1331_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1335 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1333_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1333_piece0 on ***hostname 
masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1334 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1334_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1334_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1338 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1336_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1336_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1337 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1338_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1338_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1339 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1337_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1337_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1341 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1339_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1339_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1342 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1340_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1340_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1341_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:23:15 INFO storage.BlockManagerInfo: Removed broadcast_1341_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1344 18/04/17 17:23:15 INFO spark.ContextCleaner: Cleaned accumulator 1322 18/04/17 17:23:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1342.0 (TID 1342) in 14938 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:23:15 INFO scheduler.DAGScheduler: ResultStage 1342 (foreachPartition at PredictorEngineApp.java:153) finished in 14.938 s 18/04/17 17:23:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1342.0, whose tasks have all completed, from pool 18/04/17 17:23:15 INFO scheduler.DAGScheduler: Job 1342 finished: foreachPartition at PredictorEngineApp.java:153, took 15.027923 s 18/04/17 17:23:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6a97cb9b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6a97cb9b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43412, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9850, negotiated timeout = 60000 18/04/17 17:23:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9850 18/04/17 17:23:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9850 closed 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.19 from job set of time 1523974980000 ms 18/04/17 17:23:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1322.0 (TID 1322) in 15284 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:23:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1322.0, whose tasks have all completed, from pool 18/04/17 17:23:15 INFO scheduler.DAGScheduler: ResultStage 1322 (foreachPartition at PredictorEngineApp.java:153) finished in 15.284 s 18/04/17 17:23:15 INFO scheduler.DAGScheduler: Job 1322 finished: foreachPartition at PredictorEngineApp.java:153, took 15.304513 s 18/04/17 17:23:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3174efe3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3174efe30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48011, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29130, negotiated timeout = 60000 18/04/17 17:23:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29130 18/04/17 17:23:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29130 closed 18/04/17 17:23:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.20 from job set of time 1523974980000 ms 18/04/17 17:23:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1330.0 (TID 1330) in 17007 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:23:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1330.0, whose tasks have all completed, from pool 18/04/17 17:23:17 INFO scheduler.DAGScheduler: ResultStage 1330 (foreachPartition at PredictorEngineApp.java:153) finished in 17.008 s 18/04/17 17:23:17 INFO scheduler.DAGScheduler: Job 1330 finished: foreachPartition at PredictorEngineApp.java:153, took 17.065135 s 18/04/17 17:23:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1132bb6f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1132bb6f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43421, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9851, negotiated timeout = 60000 18/04/17 17:23:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9851 18/04/17 17:23:17 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9851 closed 18/04/17 17:23:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:17 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.5 from job set of time 1523974980000 ms 18/04/17 17:23:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1318.0 (TID 1318) in 17703 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:23:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1318.0, whose tasks have all completed, from pool 18/04/17 17:23:17 INFO scheduler.DAGScheduler: ResultStage 1318 (foreachPartition at PredictorEngineApp.java:153) finished in 17.703 s 18/04/17 17:23:17 INFO scheduler.DAGScheduler: Job 1319 finished: foreachPartition at PredictorEngineApp.java:153, took 17.708585 s 18/04/17 17:23:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66be2e35 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x66be2e350x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43425, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9853, negotiated timeout = 60000 18/04/17 17:23:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9853 18/04/17 17:23:17 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9853 closed 18/04/17 17:23:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:17 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.26 from job set of time 1523974980000 ms 18/04/17 17:23:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1319.0 (TID 1319) in 21895 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:23:21 INFO scheduler.DAGScheduler: ResultStage 1319 (foreachPartition at PredictorEngineApp.java:153) finished in 21.895 s 18/04/17 17:23:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 1319.0, whose tasks have all completed, from pool 18/04/17 17:23:21 INFO scheduler.DAGScheduler: Job 1318 finished: foreachPartition at PredictorEngineApp.java:153, took 21.923432 s 18/04/17 17:23:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x14504755 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x145047550x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43436, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9854, negotiated timeout = 60000 18/04/17 17:23:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9854 18/04/17 17:23:21 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9854 closed 18/04/17 17:23:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:22 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.1 from job set of time 1523974980000 ms 18/04/17 17:23:24 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1323.0 (TID 1323) in 24269 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:23:24 INFO scheduler.DAGScheduler: ResultStage 1323 (foreachPartition at PredictorEngineApp.java:153) finished in 24.269 s 18/04/17 17:23:24 INFO cluster.YarnClusterScheduler: Removed TaskSet 1323.0, whose tasks have all completed, from pool 18/04/17 17:23:24 INFO scheduler.DAGScheduler: Job 1323 finished: foreachPartition at PredictorEngineApp.java:153, took 24.292215 s 18/04/17 17:23:24 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x13262b9b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x13262b9b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:24 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:24 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43443, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:24 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9856, negotiated timeout = 60000 18/04/17 17:23:24 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9856 18/04/17 17:23:24 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9856 closed 18/04/17 17:23:24 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:24 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.11 from job set of time 1523974980000 ms 18/04/17 17:23:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1325.0 (TID 1325) in 25657 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:23:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 1325.0, whose tasks have all completed, from pool 18/04/17 17:23:25 INFO scheduler.DAGScheduler: ResultStage 1325 (foreachPartition at PredictorEngineApp.java:153) finished in 25.657 s 18/04/17 17:23:25 INFO scheduler.DAGScheduler: Job 1326 finished: foreachPartition at PredictorEngineApp.java:153, took 25.699171 s 18/04/17 17:23:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3ab57df2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:23:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3ab57df20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:23:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:23:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48043, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:23:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29131, negotiated timeout = 60000 18/04/17 17:23:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29131 18/04/17 17:23:25 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29131 closed 18/04/17 17:23:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:23:25 INFO scheduler.JobScheduler: Finished job streaming job 1523974980000 ms.10 from job set of time 1523974980000 ms 18/04/17 17:23:25 INFO scheduler.JobScheduler: Total delay: 25.779 s for time 1523974980000 ms (execution: 25.734 s) 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1764 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1764 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1764 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1764 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1765 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1765 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1765 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1765 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1766 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1766 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1766 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1766 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1767 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1767 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1767 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1767 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1768 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1768 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1768 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1768 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1769 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1769 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1769 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1769 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1770 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1770 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1770 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1770 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1771 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1771 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1771 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1771 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1772 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1772 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1772 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1772 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1773 
from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1773 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1773 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1773 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1774 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1774 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1774 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1774 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1775 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1775 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1775 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1775 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1776 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1776 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1776 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1776 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1777 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1777 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1777 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1777 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1778 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1778 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1778 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1778 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1779 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1779 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1779 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1779 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1780 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1780 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1780 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1780 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1781 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1781 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1781 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1781 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1782 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1782 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1782 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1782 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1783 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1783 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1783 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1783 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1784 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1784 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1784 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1784 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1785 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1785 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1785 from 
persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1785 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1786 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1786 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1786 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1786 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1787 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1787 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1787 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1787 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1788 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1788 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1788 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1788 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1789 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1789 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1789 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1789 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1790 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1790 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1790 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1790 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1791 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1791 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1791 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1791 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1792 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1792 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1792 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1792 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1793 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1793 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1793 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1793 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1794 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1794 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1794 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1794 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1795 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1795 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1795 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1795 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1796 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1796 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1796 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1796 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1797 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1797 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1797 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1797 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1798 from 
persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1798 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1798 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1798 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1799 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1799 18/04/17 17:23:25 INFO kafka.KafkaRDD: Removing RDD 1799 from persistence list 18/04/17 17:23:25 INFO storage.BlockManager: Removing RDD 1799 18/04/17 17:23:25 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:23:25 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974860000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Added jobs for time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.0 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.1 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.2 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.0 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.4 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.3 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.4 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.6 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.3 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.5 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.8 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.7 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.9 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.10 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.11 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.12 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.13 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.14 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.13 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.15 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.14 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.16 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.16 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.17 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.18 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.19 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.17 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.21 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.21 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.20 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.22 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.23 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.24 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.25 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.27 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.26 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.29 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.28 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.30 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.31 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.30 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.32 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.34 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.33 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975040000 ms.35 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1345 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1345 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO 
scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1345 (KafkaRDD[1843] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1345 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1345_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1345_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1345 from broadcast at DAGScheduler.scala:1006 
18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1345 (KafkaRDD[1843] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1345.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1346 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1346 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1346 (KafkaRDD[1858] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1345.0 (TID 1345, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1346 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1346_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1346_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1346 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1346 (KafkaRDD[1858] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1346.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1347 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1347 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1347 (KafkaRDD[1863] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1346.0 (TID 1346, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1347 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1347_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1347_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1347 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1347 (KafkaRDD[1863] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1347.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1348 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1348 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1348 (KafkaRDD[1842] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1347.0 (TID 1347, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1348 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1348_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1348_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1348 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1348 (KafkaRDD[1842] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1348.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1349 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1349 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1349 (KafkaRDD[1856] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1349 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1348.0 (TID 1348, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1349_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1349_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1349 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1349 (KafkaRDD[1856] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1349.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1350 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1350 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1350 (KafkaRDD[1864] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1350 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 
17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1349.0 (TID 1349, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1350_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1350_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1350 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1350 (KafkaRDD[1864] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1350.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1352 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1351 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1351 (KafkaRDD[1865] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1351 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1350.0 (TID 1350, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1348_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1346_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1351_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1351_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1351 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1351 (KafkaRDD[1865] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1351.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1351 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1352 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1352 (KafkaRDD[1869] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1351.0 (TID 1351, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1352 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block 
broadcast_1352_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1352_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1352 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1352 (KafkaRDD[1869] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1352.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1353 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1353 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1353 (KafkaRDD[1851] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1353 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1352.0 (TID 1352, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1349_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1353_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1353_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1347_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1353 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1353 (KafkaRDD[1851] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1353.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1354 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1354 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1354 (KafkaRDD[1871] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1350_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1354 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1353.0 (TID 1353, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1345_piece0 in memory on ***hostname 
masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1354_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1354_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1354 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1354 (KafkaRDD[1871] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1354.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1355 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1355 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1355 (KafkaRDD[1860] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1355 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1354.0 (TID 1354, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1355_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1355_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1355 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1355 (KafkaRDD[1860] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1355.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1357 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1356 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1356 (KafkaRDD[1868] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1356 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1355.0 (TID 1355, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1354_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1352_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1356_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO 
storage.BlockManagerInfo: Added broadcast_1356_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1356 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1356 (KafkaRDD[1868] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1356.0 with 1 tasks 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1353_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1356 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1357 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1357 (KafkaRDD[1867] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1357 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1356.0 (TID 1356, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1355_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1351_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1357_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1357_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1357 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1357 (KafkaRDD[1867] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1357.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1358 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1358 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1358 (KafkaRDD[1870] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1358 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1357.0 (TID 1357, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1358_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1358_piece0 in 
memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1358 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1358 (KafkaRDD[1870] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1358.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1359 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1359 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1359 (KafkaRDD[1855] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1359 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1358.0 (TID 1358, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1357_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1356_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1359_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1359_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1359 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1359 (KafkaRDD[1855] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1359.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1360 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1360 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1360 (KafkaRDD[1861] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1360 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1358_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1359.0 (TID 1359, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:24:00 INFO spark.ContextCleaner: Cleaned accumulator 1324 18/04/17 17:24:00 INFO spark.ContextCleaner: Cleaned accumulator 1323 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1322_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 
491.6 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1322_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1360_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1360_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1360 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1360 (KafkaRDD[1861] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1360.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1363 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1361 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1361 (KafkaRDD[1841] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1360.0 (TID 1360, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1361 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1335_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1335_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1359_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1361_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1361_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.ContextCleaner: Cleaned accumulator 1336 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1361 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1361 (KafkaRDD[1841] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1361.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1361 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1362 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1362 (KafkaRDD[1846] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1342_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, 
free: 491.6 MB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1362 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1361.0 (TID 1361, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1342_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO spark.ContextCleaner: Cleaned accumulator 1343 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1330_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1330_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1362_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1362_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1323_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1362 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1362 (KafkaRDD[1846] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1362.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1362 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1363 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1363 (KafkaRDD[1854] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1360_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1363 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1323_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1362.0 (TID 1362, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:24:00 INFO spark.ContextCleaner: Cleaned accumulator 1326 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1318_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1361_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1318_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO spark.ContextCleaner: Cleaned accumulator 1320 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1319_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO 
storage.MemoryStore: Block broadcast_1363_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1363_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1363 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1363 (KafkaRDD[1854] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1363.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1364 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1364 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1364 (KafkaRDD[1837] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1319_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1364 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1363.0 (TID 1363, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1362_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1364_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1364_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1364 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1364 (KafkaRDD[1837] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1364.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1365 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1365 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1365 (KafkaRDD[1847] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1365 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1364.0 (TID 1364, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1365_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1365_piece0 in memory on 
***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1365 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1365 (KafkaRDD[1847] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1365.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1366 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1366 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1366 (KafkaRDD[1838] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1366 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1365.0 (TID 1365, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1364_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1366_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1366_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1366 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1366 (KafkaRDD[1838] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1366.0 with 1 tasks 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1363_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1367 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1367 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1367 (KafkaRDD[1844] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1367 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1366.0 (TID 1366, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1365_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO spark.ContextCleaner: Cleaned accumulator 1319 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1325_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1367_piece0 
stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1367_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1367 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1367 (KafkaRDD[1844] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1367.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1368 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1368 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1368 (KafkaRDD[1848] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Removed broadcast_1325_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1368 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1367.0 (TID 1367, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1368_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1368_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1368 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1368 (KafkaRDD[1848] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1368.0 with 1 tasks 18/04/17 17:24:00 INFO spark.ContextCleaner: Cleaned accumulator 1331 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1369 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1369 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1369 (KafkaRDD[1859] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1369 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1366_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1368.0 (TID 1368, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1367_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block 
broadcast_1369_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1369_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1369 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1369 (KafkaRDD[1859] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1369.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1370 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1370 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1370 (KafkaRDD[1862] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1370 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1369.0 (TID 1369, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1370_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1370_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1370 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1370 (KafkaRDD[1862] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1370.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Got job 1371 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1371 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1371 (KafkaRDD[1845] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1371 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1370.0 (TID 1370, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1368_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.MemoryStore: Block broadcast_1371_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1371_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:24:00 INFO spark.SparkContext: Created broadcast 1371 from broadcast at DAGScheduler.scala:1006 18/04/17 17:24:00 
INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1371 (KafkaRDD[1845] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Adding task set 1371.0 with 1 tasks 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1371.0 (TID 1371, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1369_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1371_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO storage.BlockManagerInfo: Added broadcast_1370_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1354.0 (TID 1354) in 627 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1354.0, whose tasks have all completed, from pool 18/04/17 17:24:00 INFO scheduler.DAGScheduler: ResultStage 1354 (foreachPartition at PredictorEngineApp.java:153) finished in 0.627 s 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Job 1354 finished: foreachPartition at PredictorEngineApp.java:153, took 0.663112 s 18/04/17 17:24:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x580745e0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x580745e00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43591, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9865, negotiated timeout = 60000 18/04/17 17:24:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9865 18/04/17 17:24:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9865 closed 18/04/17 17:24:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.35 from job set of time 1523975040000 ms 18/04/17 17:24:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1360.0 (TID 1360) in 694 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:24:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1360.0, whose tasks have all completed, from pool 18/04/17 17:24:00 INFO scheduler.DAGScheduler: ResultStage 1360 (foreachPartition at PredictorEngineApp.java:153) finished in 0.695 s 18/04/17 17:24:00 INFO scheduler.DAGScheduler: Job 1360 finished: foreachPartition at PredictorEngineApp.java:153, took 0.768367 s 18/04/17 17:24:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x8feba0a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x8feba0a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43594, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9866, negotiated timeout = 60000 18/04/17 17:24:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9866 18/04/17 17:24:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9866 closed 18/04/17 17:24:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.25 from job set of time 1523975040000 ms 18/04/17 17:24:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1367.0 (TID 1367) in 2585 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:24:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1367.0, whose tasks have all completed, from pool 18/04/17 17:24:02 INFO scheduler.DAGScheduler: ResultStage 1367 (foreachPartition at PredictorEngineApp.java:153) finished in 2.586 s 18/04/17 17:24:02 INFO scheduler.DAGScheduler: Job 1367 finished: foreachPartition at PredictorEngineApp.java:153, took 2.687147 s 18/04/17 17:24:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4bf3f629 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4bf3f6290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48195, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29140, negotiated timeout = 60000 18/04/17 17:24:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29140 18/04/17 17:24:02 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29140 closed 18/04/17 17:24:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:02 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.8 from job set of time 1523975040000 ms 18/04/17 17:24:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1345.0 (TID 1345) in 4209 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:24:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1345.0, whose tasks have all completed, from pool 18/04/17 17:24:04 INFO scheduler.DAGScheduler: ResultStage 1345 (foreachPartition at PredictorEngineApp.java:153) finished in 4.209 s 18/04/17 17:24:04 INFO scheduler.DAGScheduler: Job 1345 finished: foreachPartition at PredictorEngineApp.java:153, took 4.216292 s 18/04/17 17:24:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5d7ed44b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5d7ed44b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37224, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a980f, negotiated timeout = 60000 18/04/17 17:24:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a980f 18/04/17 17:24:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a980f closed 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:04 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.7 from job set of time 1523975040000 ms 18/04/17 17:24:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1368.0 (TID 1368) in 4170 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:24:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1368.0, whose tasks have all completed, from pool 18/04/17 17:24:04 INFO scheduler.DAGScheduler: ResultStage 1368 (foreachPartition at PredictorEngineApp.java:153) finished in 4.171 s 18/04/17 17:24:04 INFO scheduler.DAGScheduler: Job 1368 finished: foreachPartition at PredictorEngineApp.java:153, took 4.275132 s 18/04/17 17:24:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c12ed9e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c12ed9e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48204, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29141, negotiated timeout = 60000 18/04/17 17:24:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29141 18/04/17 17:24:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29141 closed 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:04 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.12 from job set of time 1523975040000 ms 18/04/17 17:24:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1350.0 (TID 1350) in 4726 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:24:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1350.0, whose tasks have all completed, from pool 18/04/17 17:24:04 INFO scheduler.DAGScheduler: ResultStage 1350 (foreachPartition at PredictorEngineApp.java:153) finished in 4.726 s 18/04/17 17:24:04 INFO scheduler.DAGScheduler: Job 1350 finished: foreachPartition at PredictorEngineApp.java:153, took 4.748507 s 18/04/17 17:24:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xcede537 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xcede5370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43613, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9868, negotiated timeout = 60000 18/04/17 17:24:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9868 18/04/17 17:24:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9868 closed 18/04/17 17:24:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:04 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.28 from job set of time 1523975040000 ms 18/04/17 17:24:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1352.0 (TID 1352) in 6102 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:24:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1352.0, whose tasks have all completed, from pool 18/04/17 17:24:06 INFO scheduler.DAGScheduler: ResultStage 1352 (foreachPartition at PredictorEngineApp.java:153) finished in 6.103 s 18/04/17 17:24:06 INFO scheduler.DAGScheduler: Job 1351 finished: foreachPartition at PredictorEngineApp.java:153, took 6.131066 s 18/04/17 17:24:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3730c92a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3730c92a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43621, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c986c, negotiated timeout = 60000 18/04/17 17:24:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c986c 18/04/17 17:24:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c986c closed 18/04/17 17:24:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:06 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.33 from job set of time 1523975040000 ms 18/04/17 17:24:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1353.0 (TID 1353) in 8299 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:24:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1353.0, whose tasks have all completed, from pool 18/04/17 17:24:08 INFO scheduler.DAGScheduler: ResultStage 1353 (foreachPartition at PredictorEngineApp.java:153) finished in 8.300 s 18/04/17 17:24:08 INFO scheduler.DAGScheduler: Job 1353 finished: foreachPartition at PredictorEngineApp.java:153, took 8.332267 s 18/04/17 17:24:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4d4e69a6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4d4e69a60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43626, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c986d, negotiated timeout = 60000 18/04/17 17:24:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c986d 18/04/17 17:24:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c986d closed 18/04/17 17:24:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:08 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.15 from job set of time 1523975040000 ms 18/04/17 17:24:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1356.0 (TID 1356) in 8515 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:24:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1356.0, whose tasks have all completed, from pool 18/04/17 17:24:08 INFO scheduler.DAGScheduler: ResultStage 1356 (foreachPartition at PredictorEngineApp.java:153) finished in 8.515 s 18/04/17 17:24:08 INFO scheduler.DAGScheduler: Job 1357 finished: foreachPartition at PredictorEngineApp.java:153, took 8.560339 s 18/04/17 17:24:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x47f4e4e5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x47f4e4e50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43629, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c986f, negotiated timeout = 60000 18/04/17 17:24:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c986f 18/04/17 17:24:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c986f closed 18/04/17 17:24:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:08 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.32 from job set of time 1523975040000 ms 18/04/17 17:24:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1363.0 (TID 1363) in 9412 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:24:09 INFO scheduler.DAGScheduler: ResultStage 1363 (foreachPartition at PredictorEngineApp.java:153) finished in 9.412 s 18/04/17 17:24:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1363.0, whose tasks have all completed, from pool 18/04/17 17:24:09 INFO scheduler.DAGScheduler: Job 1362 finished: foreachPartition at PredictorEngineApp.java:153, took 9.498121 s 18/04/17 17:24:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x29769446 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x297694460x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43633, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9870, negotiated timeout = 60000 18/04/17 17:24:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9870 18/04/17 17:24:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9870 closed 18/04/17 17:24:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:09 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.18 from job set of time 1523975040000 ms 18/04/17 17:24:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1369.0 (TID 1369) in 9745 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:24:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1369.0, whose tasks have all completed, from pool 18/04/17 17:24:09 INFO scheduler.DAGScheduler: ResultStage 1369 (foreachPartition at PredictorEngineApp.java:153) finished in 9.745 s 18/04/17 17:24:09 INFO scheduler.DAGScheduler: Job 1369 finished: foreachPartition at PredictorEngineApp.java:153, took 9.852137 s 18/04/17 17:24:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6717ca98 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6717ca980x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43636, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9873, negotiated timeout = 60000 18/04/17 17:24:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9873 18/04/17 17:24:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9873 closed 18/04/17 17:24:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:09 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.23 from job set of time 1523975040000 ms 18/04/17 17:24:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1359.0 (TID 1359) in 10429 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:24:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1359.0, whose tasks have all completed, from pool 18/04/17 17:24:10 INFO scheduler.DAGScheduler: ResultStage 1359 (foreachPartition at PredictorEngineApp.java:153) finished in 10.442 s 18/04/17 17:24:10 INFO scheduler.DAGScheduler: Job 1359 finished: foreachPartition at PredictorEngineApp.java:153, took 10.498824 s 18/04/17 17:24:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5f9c0d71 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5f9c0d710x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37259, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9810, negotiated timeout = 60000 18/04/17 17:24:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9810 18/04/17 17:24:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9810 closed 18/04/17 17:24:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.19 from job set of time 1523975040000 ms 18/04/17 17:24:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1358.0 (TID 1358) in 10873 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:24:10 INFO scheduler.DAGScheduler: ResultStage 1358 (foreachPartition at PredictorEngineApp.java:153) finished in 10.873 s 18/04/17 17:24:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1358.0, whose tasks have all completed, from pool 18/04/17 17:24:10 INFO scheduler.DAGScheduler: Job 1358 finished: foreachPartition at PredictorEngineApp.java:153, took 10.926261 s 18/04/17 17:24:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6217f0bc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6217f0bc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48240, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29147, negotiated timeout = 60000 18/04/17 17:24:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29147 18/04/17 17:24:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29147 closed 18/04/17 17:24:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:11 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.34 from job set of time 1523975040000 ms 18/04/17 17:24:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1370.0 (TID 1370) in 11498 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:24:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1370.0, whose tasks have all completed, from pool 18/04/17 17:24:11 INFO scheduler.DAGScheduler: ResultStage 1370 (foreachPartition at PredictorEngineApp.java:153) finished in 11.499 s 18/04/17 17:24:11 INFO scheduler.DAGScheduler: Job 1370 finished: foreachPartition at PredictorEngineApp.java:153, took 11.608473 s 18/04/17 17:24:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x79f2d4a5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x79f2d4a50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43649, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9875, negotiated timeout = 60000 18/04/17 17:24:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9875 18/04/17 17:24:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9875 closed 18/04/17 17:24:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:11 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.26 from job set of time 1523975040000 ms 18/04/17 17:24:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1357.0 (TID 1357) in 12737 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:24:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1357.0, whose tasks have all completed, from pool 18/04/17 17:24:12 INFO scheduler.DAGScheduler: ResultStage 1357 (foreachPartition at PredictorEngineApp.java:153) finished in 12.738 s 18/04/17 17:24:12 INFO scheduler.DAGScheduler: Job 1356 finished: foreachPartition at PredictorEngineApp.java:153, took 12.786744 s 18/04/17 17:24:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2a6e5fbf connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2a6e5fbf0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43653, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9876, negotiated timeout = 60000 18/04/17 17:24:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9876 18/04/17 17:24:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9876 closed 18/04/17 17:24:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.31 from job set of time 1523975040000 ms 18/04/17 17:24:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1351.0 (TID 1351) in 13236 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:24:13 INFO scheduler.DAGScheduler: ResultStage 1351 (foreachPartition at PredictorEngineApp.java:153) finished in 13.236 s 18/04/17 17:24:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1351.0, whose tasks have all completed, from pool 18/04/17 17:24:13 INFO scheduler.DAGScheduler: Job 1352 finished: foreachPartition at PredictorEngineApp.java:153, took 13.260736 s 18/04/17 17:24:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xe41c35d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xe41c35d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43657, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9878, negotiated timeout = 60000 18/04/17 17:24:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9878 18/04/17 17:24:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9878 closed 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:13 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.29 from job set of time 1523975040000 ms 18/04/17 17:24:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1348.0 (TID 1348) in 13401 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:24:13 INFO scheduler.DAGScheduler: ResultStage 1348 (foreachPartition at PredictorEngineApp.java:153) finished in 13.401 s 18/04/17 17:24:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1348.0, whose tasks have all completed, from pool 18/04/17 17:24:13 INFO scheduler.DAGScheduler: Job 1348 finished: foreachPartition at PredictorEngineApp.java:153, took 13.417640 s 18/04/17 17:24:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x106a7b6b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x106a7b6b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37278, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9813, negotiated timeout = 60000 18/04/17 17:24:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9813 18/04/17 17:24:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9813 closed 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:13 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.6 from job set of time 1523975040000 ms 18/04/17 17:24:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1371.0 (TID 1371) in 13407 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:24:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1371.0, whose tasks have all completed, from pool 18/04/17 17:24:13 INFO scheduler.DAGScheduler: ResultStage 1371 (foreachPartition at PredictorEngineApp.java:153) finished in 13.408 s 18/04/17 17:24:13 INFO scheduler.DAGScheduler: Job 1371 finished: foreachPartition at PredictorEngineApp.java:153, took 13.519636 s 18/04/17 17:24:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6faa1460 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6faa14600x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43663, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1366.0 (TID 1366) in 13426 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:24:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1366.0, whose tasks have all completed, from pool 18/04/17 17:24:13 INFO scheduler.DAGScheduler: ResultStage 1366 (foreachPartition at PredictorEngineApp.java:153) finished in 13.427 s 18/04/17 17:24:13 INFO scheduler.DAGScheduler: Job 1366 finished: foreachPartition at PredictorEngineApp.java:153, took 13.524886 s 18/04/17 17:24:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1bb5cb2f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1bb5cb2f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37282, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c987a, negotiated timeout = 60000 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9814, negotiated timeout = 60000 18/04/17 17:24:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c987a 18/04/17 17:24:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c987a closed 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9814 18/04/17 17:24:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9814 closed 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:13 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.9 from job set of time 1523975040000 ms 18/04/17 17:24:13 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.2 from job set of time 1523975040000 ms 18/04/17 17:24:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1355.0 (TID 1355) in 13549 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:24:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1355.0, whose tasks have all completed, from pool 18/04/17 17:24:13 INFO scheduler.DAGScheduler: ResultStage 1355 (foreachPartition at PredictorEngineApp.java:153) finished in 13.549 s 18/04/17 17:24:13 INFO scheduler.DAGScheduler: Job 1355 finished: foreachPartition at PredictorEngineApp.java:153, took 13.588938 s 18/04/17 17:24:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xd077638 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xd0776380x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37287, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9815, negotiated timeout = 60000 18/04/17 17:24:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9815 18/04/17 17:24:13 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9815 closed 18/04/17 17:24:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:13 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.24 from job set of time 1523975040000 ms 18/04/17 17:24:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1347.0 (TID 1347) in 14238 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:24:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1347.0, whose tasks have all completed, from pool 18/04/17 17:24:14 INFO scheduler.DAGScheduler: ResultStage 1347 (foreachPartition at PredictorEngineApp.java:153) finished in 14.239 s 18/04/17 17:24:14 INFO scheduler.DAGScheduler: Job 1347 finished: foreachPartition at PredictorEngineApp.java:153, took 14.251520 s 18/04/17 17:24:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2d2914c8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2d2914c80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43673, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c987c, negotiated timeout = 60000 18/04/17 17:24:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c987c 18/04/17 17:24:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c987c closed 18/04/17 17:24:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:14 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.27 from job set of time 1523975040000 ms 18/04/17 17:24:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1349.0 (TID 1349) in 18427 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:24:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1349.0, whose tasks have all completed, from pool 18/04/17 17:24:18 INFO scheduler.DAGScheduler: ResultStage 1349 (foreachPartition at PredictorEngineApp.java:153) finished in 18.427 s 18/04/17 17:24:18 INFO scheduler.DAGScheduler: Job 1349 finished: foreachPartition at PredictorEngineApp.java:153, took 18.446656 s 18/04/17 17:24:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x65022dfa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x65022dfa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43689, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c987e, negotiated timeout = 60000 18/04/17 17:24:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c987e 18/04/17 17:24:18 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c987e closed 18/04/17 17:24:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:18 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.20 from job set of time 1523975040000 ms 18/04/17 17:24:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1364.0 (TID 1364) in 19536 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:24:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 1364.0, whose tasks have all completed, from pool 18/04/17 17:24:19 INFO scheduler.DAGScheduler: ResultStage 1364 (foreachPartition at PredictorEngineApp.java:153) finished in 19.536 s 18/04/17 17:24:19 INFO scheduler.DAGScheduler: Job 1364 finished: foreachPartition at PredictorEngineApp.java:153, took 19.626876 s 18/04/17 17:24:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x61c36c6e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x61c36c6e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37311, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9817, negotiated timeout = 60000 18/04/17 17:24:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9817 18/04/17 17:24:19 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9817 closed 18/04/17 17:24:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:19 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.1 from job set of time 1523975040000 ms 18/04/17 17:24:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1361.0 (TID 1361) in 25752 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:24:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 1361.0, whose tasks have all completed, from pool 18/04/17 17:24:25 INFO scheduler.DAGScheduler: ResultStage 1361 (foreachPartition at PredictorEngineApp.java:153) finished in 25.753 s 18/04/17 17:24:25 INFO scheduler.DAGScheduler: Job 1363 finished: foreachPartition at PredictorEngineApp.java:153, took 25.830711 s 18/04/17 17:24:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6804c7b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6804c7b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37324, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a981a, negotiated timeout = 60000 18/04/17 17:24:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a981a 18/04/17 17:24:25 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a981a closed 18/04/17 17:24:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:25 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.5 from job set of time 1523975040000 ms 18/04/17 17:24:26 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1346.0 (TID 1346) in 26581 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:24:26 INFO cluster.YarnClusterScheduler: Removed TaskSet 1346.0, whose tasks have all completed, from pool 18/04/17 17:24:26 INFO scheduler.DAGScheduler: ResultStage 1346 (foreachPartition at PredictorEngineApp.java:153) finished in 26.581 s 18/04/17 17:24:26 INFO scheduler.DAGScheduler: Job 1346 finished: foreachPartition at PredictorEngineApp.java:153, took 26.591407 s 18/04/17 17:24:26 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x883dd96 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:26 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x883dd960x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:26 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:26 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48306, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:26 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2914f, negotiated timeout = 60000 18/04/17 17:24:26 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2914f 18/04/17 17:24:26 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2914f closed 18/04/17 17:24:26 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:26 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.22 from job set of time 1523975040000 ms 18/04/17 17:24:30 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1365.0 (TID 1365) in 30148 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:24:30 INFO cluster.YarnClusterScheduler: Removed TaskSet 1365.0, whose tasks have all completed, from pool 18/04/17 17:24:30 INFO scheduler.DAGScheduler: ResultStage 1365 (foreachPartition at PredictorEngineApp.java:153) finished in 30.148 s 18/04/17 17:24:30 INFO scheduler.DAGScheduler: Job 1365 finished: foreachPartition at PredictorEngineApp.java:153, took 30.241982 s 18/04/17 17:24:30 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1decf93e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:30 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1decf93e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:30 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:30 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43719, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:30 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9880, negotiated timeout = 60000 18/04/17 17:24:30 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9880 18/04/17 17:24:30 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9880 closed 18/04/17 17:24:30 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:30 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.11 from job set of time 1523975040000 ms 18/04/17 17:24:32 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1362.0 (TID 1362) in 32083 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:24:32 INFO cluster.YarnClusterScheduler: Removed TaskSet 1362.0, whose tasks have all completed, from pool 18/04/17 17:24:32 INFO scheduler.DAGScheduler: ResultStage 1362 (foreachPartition at PredictorEngineApp.java:153) finished in 32.083 s 18/04/17 17:24:32 INFO scheduler.DAGScheduler: Job 1361 finished: foreachPartition at PredictorEngineApp.java:153, took 32.165511 s 18/04/17 17:24:32 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x550a9180 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:24:32 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x550a91800x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:24:32 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:24:32 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43725, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:24:32 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9884, negotiated timeout = 60000 18/04/17 17:24:32 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9884 18/04/17 17:24:32 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9884 closed 18/04/17 17:24:32 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:24:32 INFO scheduler.JobScheduler: Finished job streaming job 1523975040000 ms.10 from job set of time 1523975040000 ms 18/04/17 17:24:32 INFO scheduler.JobScheduler: Total delay: 32.246 s for time 1523975040000 ms (execution: 32.202 s) 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1800 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1800 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1800 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1800 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1801 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1801 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1801 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1801 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1802 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1802 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1802 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1802 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1803 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1803 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1803 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1803 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1804 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1804 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1804 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1804 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1805 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1805 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1805 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1805 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1806 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1806 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1806 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1806 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1807 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1807 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1807 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1807 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1808 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1808 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1808 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1808 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1809 
from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1809 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1809 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1809 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1810 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1810 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1810 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1810 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1811 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1811 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1811 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1811 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1812 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1812 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1812 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1812 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1813 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1813 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1813 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1813 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1814 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1814 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1814 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1814 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1815 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1815 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1815 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1815 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1816 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1816 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1816 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1816 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1817 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1817 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1817 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1817 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1818 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1818 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1818 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1818 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1819 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1819 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1819 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1819 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1820 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1820 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1820 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1820 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1821 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1821 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1821 from 
persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1821 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1822 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1822 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1822 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1822 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1823 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1823 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1823 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1823 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1824 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1824 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1824 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1824 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1825 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1825 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1825 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1825 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1826 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1826 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1826 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1826 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1827 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1827 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1827 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1827 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1828 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1828 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1828 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1828 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1829 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1829 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1829 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1829 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1830 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1830 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1830 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1830 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1831 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1831 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1831 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1831 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1832 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1832 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1832 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1832 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1833 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1833 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1833 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1833 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1834 from 
persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1834 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1834 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1834 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1835 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1835 18/04/17 17:24:32 INFO kafka.KafkaRDD: Removing RDD 1835 from persistence list 18/04/17 17:24:32 INFO storage.BlockManager: Removing RDD 1835 18/04/17 17:24:32 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:24:32 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974920000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Added jobs for time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.2 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.1 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.0 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.3 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.4 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.3 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.0 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.4 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.6 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.7 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.5 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.8 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.9 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.10 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.11 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.12 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.13 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.14 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.13 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.16 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.15 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO 
scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.14 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.16 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.17 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.18 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.19 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.17 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.20 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.21 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.21 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.23 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.22 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.24 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.25 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.26 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.28 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.29 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.30 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.27 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.31 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.30 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.32 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.33 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.34 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975100000 ms.35 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1346 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1351 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1347 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1348 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1349_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, 
free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1372 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1372 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1372 (KafkaRDD[1879] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1372 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1349_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: 
Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1372_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1372_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1347_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1372 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1372 (KafkaRDD[1879] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1372.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1373 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1373 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1373 (KafkaRDD[1899] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1372.0 (TID 1372, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1373 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1347_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1373_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1373_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1348_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1373 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1373 (KafkaRDD[1899] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1373.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1374 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1374 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1374 (KafkaRDD[1906] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1373.0 (TID 1373, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:25:00 INFO 
storage.BlockManagerInfo: Removed broadcast_1348_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1374 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1350_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1350_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1374_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1374_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1374 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1374 (KafkaRDD[1906] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1374.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1375 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1375 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1375 (KafkaRDD[1873] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1374.0 (TID 1374, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1352_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1375 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1352_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1353 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1375_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1375_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1351_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1375 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1375 (KafkaRDD[1873] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1375.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1376 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1376 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final 
stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1376 (KafkaRDD[1883] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1375.0 (TID 1375, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1376 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1372_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1373_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1351_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1376_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1376_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1376 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1376 (KafkaRDD[1883] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1376.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1377 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1377 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1377 (KafkaRDD[1877] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1376.0 (TID 1376, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1377 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1352 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1354_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1354_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1377_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1377_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1377 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1377 (KafkaRDD[1877] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1377.0 with 1 tasks 18/04/17 17:25:00 
INFO scheduler.DAGScheduler: Got job 1378 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1378 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1378 (KafkaRDD[1878] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1378 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1377.0 (TID 1377, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1355 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1353_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1374_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1378_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1378_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1378 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1378 (KafkaRDD[1878] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1378.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1379 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1379 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1379 (KafkaRDD[1897] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1379 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1378.0 (TID 1378, ***hostname masked***, executor 7, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1376_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1353_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1354 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1379_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1375_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1356_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 
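The two source locations that recur throughout these entries are PredictorEngineApp.java:125 (the createDirectStream call that each KafkaRDD[n] is traced back to) and PredictorEngineApp.java:153 (the foreachPartition action that drives every streaming job and ResultStage). The hconnection-0x... ZooKeeper sessions that open and immediately close around each finished job suggest an HBase connection being created and torn down inside that foreachPartition. The sketch below reconstructs that shape for Spark 1.6 with the Kafka 0.8 direct API and the HBase 1.x client; it is an assumption based only on these log lines, not the actual application code, and every broker, topic, table, column, and variable name in it is hypothetical.

// Hypothetical reconstruction of the driver code implied by this log.
// Only the two call sites (createDirectStream at line 125, foreachPartition at line 153)
// and the open/close-per-partition HBase connection pattern come from the log;
// broker, topic, table, and column names are placeholders.
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class PredictorEngineAppSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine");
        // 60 s batches, matching the one-minute job sets (...975040000 ms, ...975100000 ms) above.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical brokers
        Set<String> topics = new HashSet<>(Arrays.asList("events"));          // hypothetical topic

        // PredictorEngineApp.java:125 -- each batch of this direct stream becomes one KafkaRDD[n].
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        stream.foreachRDD(rdd -> {
            JavaRDD<String> values = rdd.map(t -> t._2());
            // PredictorEngineApp.java:153 -- one ResultStage with a single task per job in the log.
            values.foreachPartition(records -> {
                // Creating the connection here, once per partition per batch, is what would produce
                // the repeated "hconnection-0x... connecting to ZooKeeper ensemble ..." /
                // "Session ... closed" pairs seen in the entries above.
                Configuration hbaseConf = HBaseConfiguration.create();
                try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
                     Table table = connection.getTable(TableName.valueOf("predictions"))) { // hypothetical table
                    while (records.hasNext()) {
                        String record = records.next();
                        Put put = new Put(Bytes.toBytes(record.hashCode()));                // hypothetical row key
                        put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("v"), Bytes.toBytes(record));
                        table.put(put);
                    }
                }
            });
        });

        jssc.start();
        jssc.awaitTermination();
    }
}

Under this reading, the connection churn is expected behavior rather than an error: ConnectionFactory.createConnection in the HBase 1.x client logs exactly the ConnectionManager$HConnectionImplementation / RecoverableZooKeeper lines seen here each time it is opened and closed.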
18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1379_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1379 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1379 (KafkaRDD[1897] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1379.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1380 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1380 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1380 (KafkaRDD[1901] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1380 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1379.0 (TID 1379, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1356_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1380_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1380_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1380 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1380 (KafkaRDD[1901] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1380.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1381 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1381 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1381 (KafkaRDD[1905] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1381 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1380.0 (TID 1380, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1378_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1381_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1381_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 
1381 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1381 (KafkaRDD[1905] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1381.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1382 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1382 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1382 (KafkaRDD[1907] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1379_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1382 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1381.0 (TID 1381, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1382_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1382_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1382 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1382 (KafkaRDD[1907] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1382.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1384 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1383 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1383 (KafkaRDD[1898] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1383 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1382.0 (TID 1382, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1383_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1383_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1383 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1383 (KafkaRDD[1898] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1383.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got 
job 1383 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1384 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1384 (KafkaRDD[1874] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1384 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1383.0 (TID 1383, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1382_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1384_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1384_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1384 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1384 (KafkaRDD[1874] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1384.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1385 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1385 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1385 (KafkaRDD[1892] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1385 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1384.0 (TID 1384, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1380_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1381_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1385_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1385_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1385 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1385 (KafkaRDD[1892] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1385.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1386 (foreachPartition at PredictorEngineApp.java:153) 
with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1386 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1386 (KafkaRDD[1890] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1386 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1385.0 (TID 1385, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1386_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1386_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1386 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1386 (KafkaRDD[1890] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1386.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1387 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1387 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1387 (KafkaRDD[1884] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1387 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1386.0 (TID 1386, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1383_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1387_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1387_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1387 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1387 (KafkaRDD[1884] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1387.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1388 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1388 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: 
Submitting ResultStage 1388 (KafkaRDD[1904] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1388 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1387.0 (TID 1387, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1384_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1388_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1388_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1388 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1388 (KafkaRDD[1904] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1388.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1389 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1389 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1389 (KafkaRDD[1880] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1389 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1388.0 (TID 1388, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1389_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1389_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1389 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1389 (KafkaRDD[1880] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1389.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1390 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1390 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1390 (KafkaRDD[1896] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1390 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1389.0 (TID 1389, 
***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1390_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1390_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1390 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1390 (KafkaRDD[1896] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1390.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1391 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1391 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1391 (KafkaRDD[1903] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1391 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1390.0 (TID 1390, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1387_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1388_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1385_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1377_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1357 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1355_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1391_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1391_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1391 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1391 (KafkaRDD[1903] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1391.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1392 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1392 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1392 (KafkaRDD[1895] at 
createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1392 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1391.0 (TID 1391, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1392_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1392_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1355_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1392 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1392 (KafkaRDD[1895] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1392.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1394 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1393 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1393 (KafkaRDD[1887] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1393 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1389_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1356 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1392.0 (TID 1392, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1358_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1390_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1393_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1393_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1393 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1393 (KafkaRDD[1887] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1393.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1393 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1394 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 
17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1394 (KafkaRDD[1891] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1394 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1391_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1393.0 (TID 1393, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1358_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1394_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1394_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1394 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1394 (KafkaRDD[1891] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1394.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1395 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1395 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1359 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1395 (KafkaRDD[1882] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1395 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1357_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1357_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1394.0 (TID 1394, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1395_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1395_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1395 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1395 (KafkaRDD[1882] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1395.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1396 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final 
stage: ResultStage 1396 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1396 (KafkaRDD[1881] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1396 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1395.0 (TID 1395, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1393_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1396_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1396_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1396 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1396 (KafkaRDD[1881] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1396.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Got job 1397 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1397 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1397 (KafkaRDD[1900] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1397 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1396.0 (TID 1396, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1376.0 (TID 1376) in 61 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1376.0, whose tasks have all completed, from pool 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1379.0 (TID 1379) in 52 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1379.0, whose tasks have all completed, from pool 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1397_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1397_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1397 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1397 (KafkaRDD[1900] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1397.0 with 1 tasks 18/04/17 
17:25:00 INFO scheduler.DAGScheduler: Got job 1398 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1398 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1398 (KafkaRDD[1894] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1398 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1397.0 (TID 1397, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:25:00 INFO storage.MemoryStore: Block broadcast_1398_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1395_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1398_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO spark.SparkContext: Created broadcast 1398 from broadcast at DAGScheduler.scala:1006 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1398 (KafkaRDD[1894] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Adding task set 1398.0 with 1 tasks 18/04/17 17:25:00 INFO scheduler.DAGScheduler: ResultStage 1376 (foreachPartition at PredictorEngineApp.java:153) finished in 0.064 s 18/04/17 17:25:00 INFO scheduler.DAGScheduler: ResultStage 1379 (foreachPartition at PredictorEngineApp.java:153) finished in 0.056 s 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Job 1376 finished: foreachPartition at PredictorEngineApp.java:153, took 0.084255 s 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Job 1379 finished: foreachPartition at PredictorEngineApp.java:153, took 0.083315 s 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1398.0 (TID 1398, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:25:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x518f2c0b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x506c39e7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x518f2c0b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x506c39e70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1392_piece0 in memory on ***hostname 
masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1386_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1358 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43850, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37467, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1360_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1360_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1361 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1359_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1359_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1396_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1360 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c988d, negotiated timeout = 60000 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1345_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1398_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1345_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1349 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1362 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1364 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1362_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1362_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1394_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1363 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1361_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9828, negotiated timeout = 60000 
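[Annotation, not part of the original log] The entries around this point show the pattern the whole excerpt repeats: each streaming batch creates KafkaRDDs at "createDirectStream at PredictorEngineApp.java:125", runs one single-task ResultStage per output operation at "foreachPartition at PredictorEngineApp.java:153", and each of those tasks opens and immediately closes an HBase client connection, which is what produces the short-lived "hconnection-..." ZooKeeper sessions logged above. The actual PredictorEngineApp source is not available here; the following is only a minimal sketch of that shape using the public Spark 1.6 Kafka direct-stream and HBase 1.x client APIs. The class name, brokers, topic, table name, column family, and batch interval below are all placeholders, not values taken from this log.

    // Illustrative sketch only; identifiers marked "placeholder" are assumptions.
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;

    import kafka.serializer.StringDecoder;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.function.VoidFunction;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;
    import scala.Tuple2;

    public class PredictorEngineSketch { // placeholder name, not the real app class
      public static void main(String[] args) throws Exception {
        SparkConf sparkConf = new SparkConf().setAppName("predictor-engine-sketch");
        // Placeholder batch interval; the real interval is not visible in this excerpt.
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.minutes(1));

        Map<String, String> kafkaParams = new HashMap<String, String>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers
        Set<String> topics = new HashSet<String>(Arrays.asList("events"));    // placeholder topic

        // Receiver-less Kafka stream; each batch shows up in the log as KafkaRDDs
        // created at "createDirectStream at PredictorEngineApp.java:125".
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
            jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
            kafkaParams, topics);

        // One job per batch per output operation; the per-partition writes correspond to the
        // "foreachPartition at PredictorEngineApp.java:153" ResultStages in the log.
        stream.foreachRDD(new VoidFunction<JavaPairRDD<String, String>>() {
          @Override
          public void call(JavaPairRDD<String, String> rdd) {
            rdd.foreachPartition(new VoidFunction<Iterator<Tuple2<String, String>>>() {
              @Override
              public void call(Iterator<Tuple2<String, String>> records) throws Exception {
                // Opening the HBase connection inside the task is what would produce the
                // short-lived ZooKeeper sessions ("hconnection-..." open/close) seen above.
                Configuration hbaseConf = HBaseConfiguration.create();
                try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
                     Table table = connection.getTable(TableName.valueOf("predictions"))) { // placeholder table
                  while (records.hasNext()) {
                    Tuple2<String, String> record = records.next();
                    Put put = new Put(Bytes.toBytes(record._1()));
                    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("v"), Bytes.toBytes(record._2()));
                    table.put(put);
                  }
                }
              }
            });
          }
        });

        jssc.start();
        jssc.awaitTermination();
      }
    }

[End of annotation; the original log continues below.]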
18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1361_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c988d 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Added broadcast_1397_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9828 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c988d closed 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9828 closed 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.25 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.11 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1386.0 (TID 1386) in 83 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1386.0, whose tasks have all completed, from pool 18/04/17 17:25:00 INFO scheduler.DAGScheduler: ResultStage 1386 (foreachPartition at PredictorEngineApp.java:153) finished in 0.084 s 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Job 1386 finished: foreachPartition at PredictorEngineApp.java:153, took 0.129260 s 18/04/17 17:25:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x20effe22 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x20effe220x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37473, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1366 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1364_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9829, negotiated timeout = 60000 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1364_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9829 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9829 closed 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.18 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1365 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1363_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1363_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1368 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1366_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1366_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1367 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1365_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1365_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1369 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1367_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1367_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1346_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1346_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1370 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1368_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1368_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1369_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1369_piece0 on ***hostname 
masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1350 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1371_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1371_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1372 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1370_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:25:00 INFO storage.BlockManagerInfo: Removed broadcast_1370_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:25:00 INFO spark.ContextCleaner: Cleaned accumulator 1371 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1377.0 (TID 1377) in 190 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: ResultStage 1377 (foreachPartition at PredictorEngineApp.java:153) finished in 0.190 s 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1377.0, whose tasks have all completed, from pool 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Job 1377 finished: foreachPartition at PredictorEngineApp.java:153, took 0.212168 s 18/04/17 17:25:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1d07c97e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1d07c97e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48453, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1391.0 (TID 1391) in 152 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:25:00 INFO scheduler.DAGScheduler: ResultStage 1391 (foreachPartition at PredictorEngineApp.java:153) finished in 0.153 s 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1391.0, whose tasks have all completed, from pool 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Job 1391 finished: foreachPartition at PredictorEngineApp.java:153, took 0.213667 s 18/04/17 17:25:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1ff33a3e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1ff33a3e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43859, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29160, negotiated timeout = 60000 18/04/17 17:25:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29160 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9892, negotiated timeout = 60000 18/04/17 17:25:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9892 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29160 closed 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.5 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9892 closed 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.31 from job set of time 1523975100000 ms 18/04/17 17:25:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1382.0 (TID 1382) in 504 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:25:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1382.0, whose tasks have all completed, from pool 18/04/17 17:25:00 INFO scheduler.DAGScheduler: ResultStage 1382 (foreachPartition at PredictorEngineApp.java:153) finished in 0.505 s 18/04/17 17:25:00 INFO scheduler.DAGScheduler: Job 1382 finished: foreachPartition at PredictorEngineApp.java:153, took 0.540368 s 18/04/17 17:25:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xb2d8241 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xb2d82410x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37482, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9830, negotiated timeout = 60000 18/04/17 17:25:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9830 18/04/17 17:25:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9830 closed 18/04/17 17:25:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.35 from job set of time 1523975100000 ms 18/04/17 17:25:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1372.0 (TID 1372) in 1759 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:25:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1372.0, whose tasks have all completed, from pool 18/04/17 17:25:01 INFO scheduler.DAGScheduler: ResultStage 1372 (foreachPartition at PredictorEngineApp.java:153) finished in 1.759 s 18/04/17 17:25:01 INFO scheduler.DAGScheduler: Job 1372 finished: foreachPartition at PredictorEngineApp.java:153, took 1.765674 s 18/04/17 17:25:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6113a9da connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6113a9da0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37492, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9831, negotiated timeout = 60000 18/04/17 17:25:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9831 18/04/17 17:25:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9831 closed 18/04/17 17:25:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:01 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.7 from job set of time 1523975100000 ms 18/04/17 17:25:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1389.0 (TID 1389) in 1750 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:25:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1389.0, whose tasks have all completed, from pool 18/04/17 17:25:01 INFO scheduler.DAGScheduler: ResultStage 1389 (foreachPartition at PredictorEngineApp.java:153) finished in 1.751 s 18/04/17 17:25:01 INFO scheduler.DAGScheduler: Job 1389 finished: foreachPartition at PredictorEngineApp.java:153, took 1.805859 s 18/04/17 17:25:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1cb8fafd connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1cb8fafd0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48472, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29163, negotiated timeout = 60000 18/04/17 17:25:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29163 18/04/17 17:25:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29163 closed 18/04/17 17:25:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:01 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.8 from job set of time 1523975100000 ms 18/04/17 17:25:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1385.0 (TID 1385) in 5917 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:25:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1385.0, whose tasks have all completed, from pool 18/04/17 17:25:06 INFO scheduler.DAGScheduler: ResultStage 1385 (foreachPartition at PredictorEngineApp.java:153) finished in 5.917 s 18/04/17 17:25:06 INFO scheduler.DAGScheduler: Job 1385 finished: foreachPartition at PredictorEngineApp.java:153, took 5.960365 s 18/04/17 17:25:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x751a8bf6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x751a8bf60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37510, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1384.0 (TID 1384) in 5926 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:25:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1384.0, whose tasks have all completed, from pool 18/04/17 17:25:06 INFO scheduler.DAGScheduler: ResultStage 1384 (foreachPartition at PredictorEngineApp.java:153) finished in 5.927 s 18/04/17 17:25:06 INFO scheduler.DAGScheduler: Job 1383 finished: foreachPartition at PredictorEngineApp.java:153, took 5.967143 s 18/04/17 17:25:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3979bf96 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3979bf960x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43893, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9832, negotiated timeout = 60000 18/04/17 17:25:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9897, negotiated timeout = 60000 18/04/17 17:25:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9897 18/04/17 17:25:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9832 18/04/17 17:25:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9832 closed 18/04/17 17:25:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:06 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9897 closed 18/04/17 17:25:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:06 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.20 from job set of time 1523975100000 ms 18/04/17 17:25:06 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.2 from job set of time 1523975100000 ms 18/04/17 17:25:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1388.0 (TID 1388) in 10315 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:25:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1388.0, whose tasks have all completed, from pool 18/04/17 17:25:10 INFO scheduler.DAGScheduler: ResultStage 1388 (foreachPartition at PredictorEngineApp.java:153) finished in 10.316 s 18/04/17 17:25:10 INFO scheduler.DAGScheduler: Job 1388 finished: foreachPartition at PredictorEngineApp.java:153, took 10.367923 s 18/04/17 17:25:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x21c87b3a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x21c87b3a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43904, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9898, negotiated timeout = 60000 18/04/17 17:25:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9898 18/04/17 17:25:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9898 closed 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1381.0 (TID 1381) in 10367 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:25:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1381.0, whose tasks have all completed, from pool 18/04/17 17:25:10 INFO scheduler.DAGScheduler: ResultStage 1381 (foreachPartition at PredictorEngineApp.java:153) finished in 10.367 s 18/04/17 17:25:10 INFO scheduler.DAGScheduler: Job 1381 finished: foreachPartition at PredictorEngineApp.java:153, took 10.400092 s 18/04/17 17:25:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xeeba1ff connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xeeba1ff0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43907, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.32 from job set of time 1523975100000 ms 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9899, negotiated timeout = 60000 18/04/17 17:25:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9899 18/04/17 17:25:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9899 closed 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.33 from job set of time 1523975100000 ms 18/04/17 17:25:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1393.0 (TID 1393) in 10522 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:25:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1393.0, whose tasks have all completed, from pool 18/04/17 17:25:10 INFO scheduler.DAGScheduler: ResultStage 1393 (foreachPartition at PredictorEngineApp.java:153) finished in 10.522 s 18/04/17 17:25:10 INFO scheduler.DAGScheduler: Job 1394 finished: foreachPartition at PredictorEngineApp.java:153, took 10.589191 s 18/04/17 17:25:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x205e4058 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x205e40580x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37528, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9836, negotiated timeout = 60000 18/04/17 17:25:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9836 18/04/17 17:25:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9836 closed 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.15 from job set of time 1523975100000 ms 18/04/17 17:25:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1380.0 (TID 1380) in 10604 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:25:10 INFO scheduler.DAGScheduler: ResultStage 1380 (foreachPartition at PredictorEngineApp.java:153) finished in 10.605 s 18/04/17 17:25:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1380.0, whose tasks have all completed, from pool 18/04/17 17:25:10 INFO scheduler.DAGScheduler: Job 1380 finished: foreachPartition at PredictorEngineApp.java:153, took 10.634409 s 18/04/17 17:25:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x40b106d6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x40b106d60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37531, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9837, negotiated timeout = 60000 18/04/17 17:25:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9837 18/04/17 17:25:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9837 closed 18/04/17 17:25:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.29 from job set of time 1523975100000 ms 18/04/17 17:25:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1390.0 (TID 1390) in 11109 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:25:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1390.0, whose tasks have all completed, from pool 18/04/17 17:25:11 INFO scheduler.DAGScheduler: ResultStage 1390 (foreachPartition at PredictorEngineApp.java:153) finished in 11.110 s 18/04/17 17:25:11 INFO scheduler.DAGScheduler: Job 1390 finished: foreachPartition at PredictorEngineApp.java:153, took 11.168302 s 18/04/17 17:25:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xb6df2e6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xb6df2e60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37535, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9838, negotiated timeout = 60000 18/04/17 17:25:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9838 18/04/17 17:25:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9838 closed 18/04/17 17:25:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:11 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.24 from job set of time 1523975100000 ms 18/04/17 17:25:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1394.0 (TID 1394) in 12165 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:25:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1394.0, whose tasks have all completed, from pool 18/04/17 17:25:12 INFO scheduler.DAGScheduler: ResultStage 1394 (foreachPartition at PredictorEngineApp.java:153) finished in 12.171 s 18/04/17 17:25:12 INFO scheduler.DAGScheduler: Job 1393 finished: foreachPartition at PredictorEngineApp.java:153, took 12.239887 s 18/04/17 17:25:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x60a8aecf connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x60a8aecf0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48517, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29168, negotiated timeout = 60000 18/04/17 17:25:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29168 18/04/17 17:25:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29168 closed 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.19 from job set of time 1523975100000 ms 18/04/17 17:25:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1373.0 (TID 1373) in 12276 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:25:12 INFO scheduler.DAGScheduler: ResultStage 1373 (foreachPartition at PredictorEngineApp.java:153) finished in 12.276 s 18/04/17 17:25:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1373.0, whose tasks have all completed, from pool 18/04/17 17:25:12 INFO scheduler.DAGScheduler: Job 1373 finished: foreachPartition at PredictorEngineApp.java:153, took 12.285191 s 18/04/17 17:25:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2e358c38 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2e358c380x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43925, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c989c, negotiated timeout = 60000 18/04/17 17:25:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c989c 18/04/17 17:25:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1392.0 (TID 1392) in 12235 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:25:12 INFO scheduler.DAGScheduler: ResultStage 1392 (foreachPartition at PredictorEngineApp.java:153) finished in 12.236 s 18/04/17 17:25:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1392.0, whose tasks have all completed, from pool 18/04/17 17:25:12 INFO scheduler.DAGScheduler: Job 1392 finished: foreachPartition at PredictorEngineApp.java:153, took 12.299696 s 18/04/17 17:25:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ff50c9a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4ff50c9a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c989c closed 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43928, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c989d, negotiated timeout = 60000 18/04/17 17:25:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.27 from job set of time 1523975100000 ms 18/04/17 17:25:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c989d 18/04/17 17:25:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c989d closed 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.23 from job set of time 1523975100000 ms 18/04/17 17:25:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1375.0 (TID 1375) in 12751 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:25:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1375.0, whose tasks have all completed, from pool 18/04/17 17:25:12 INFO scheduler.DAGScheduler: ResultStage 1375 (foreachPartition at PredictorEngineApp.java:153) finished in 12.751 s 18/04/17 17:25:12 INFO scheduler.DAGScheduler: Job 1375 finished: foreachPartition at PredictorEngineApp.java:153, took 12.768462 s 18/04/17 17:25:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x102cee12 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x102cee120x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43931, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c989e, negotiated timeout = 60000 18/04/17 17:25:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c989e 18/04/17 17:25:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c989e closed 18/04/17 17:25:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.1 from job set of time 1523975100000 ms 18/04/17 17:25:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1387.0 (TID 1387) in 14096 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:25:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1387.0, whose tasks have all completed, from pool 18/04/17 17:25:14 INFO scheduler.DAGScheduler: ResultStage 1387 (foreachPartition at PredictorEngineApp.java:153) finished in 14.097 s 18/04/17 17:25:14 INFO scheduler.DAGScheduler: Job 1387 finished: foreachPartition at PredictorEngineApp.java:153, took 14.145374 s 18/04/17 17:25:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f929f24 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f929f240x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43936, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c989f, negotiated timeout = 60000 18/04/17 17:25:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c989f 18/04/17 17:25:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c989f closed 18/04/17 17:25:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:14 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.12 from job set of time 1523975100000 ms 18/04/17 17:25:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1396.0 (TID 1396) in 14305 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:25:14 INFO scheduler.DAGScheduler: ResultStage 1396 (foreachPartition at PredictorEngineApp.java:153) finished in 14.306 s 18/04/17 17:25:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1396.0, whose tasks have all completed, from pool 18/04/17 17:25:14 INFO scheduler.DAGScheduler: Job 1396 finished: foreachPartition at PredictorEngineApp.java:153, took 14.383952 s 18/04/17 17:25:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2a2a8c37 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2a2a8c370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48534, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2916c, negotiated timeout = 60000 18/04/17 17:25:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2916c 18/04/17 17:25:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2916c closed 18/04/17 17:25:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:14 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.9 from job set of time 1523975100000 ms 18/04/17 17:25:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1397.0 (TID 1397) in 15075 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:25:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1397.0, whose tasks have all completed, from pool 18/04/17 17:25:15 INFO scheduler.DAGScheduler: ResultStage 1397 (foreachPartition at PredictorEngineApp.java:153) finished in 15.076 s 18/04/17 17:25:15 INFO scheduler.DAGScheduler: Job 1397 finished: foreachPartition at PredictorEngineApp.java:153, took 15.156123 s 18/04/17 17:25:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xdc2e9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xdc2e90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1374.0 (TID 1374) in 15146 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:25:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1374.0, whose tasks have all completed, from pool 18/04/17 17:25:15 INFO scheduler.DAGScheduler: ResultStage 1374 (foreachPartition at PredictorEngineApp.java:153) finished in 15.146 s 18/04/17 17:25:15 INFO scheduler.DAGScheduler: Job 1374 finished: foreachPartition at PredictorEngineApp.java:153, took 15.159120 s 18/04/17 17:25:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x40cb0e23 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x40cb0e230x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43944, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37563, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98a1, negotiated timeout = 60000 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a983b, negotiated timeout = 60000 18/04/17 17:25:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98a1 18/04/17 17:25:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98a1 closed 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a983b 18/04/17 17:25:15 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a983b closed 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1378.0 (TID 1378) in 15161 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:25:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1378.0, whose tasks have all completed, from pool 18/04/17 17:25:15 INFO scheduler.DAGScheduler: ResultStage 1378 (foreachPartition at PredictorEngineApp.java:153) finished in 15.161 s 18/04/17 17:25:15 INFO scheduler.DAGScheduler: Job 1378 finished: foreachPartition at PredictorEngineApp.java:153, took 15.185795 s 18/04/17 17:25:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x24f0e515 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x24f0e5150x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:15 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.28 from job set of time 1523975100000 ms 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48545, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:15 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.34 from job set of time 1523975100000 ms 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29170, negotiated timeout = 60000 18/04/17 17:25:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29170 18/04/17 17:25:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29170 closed 18/04/17 17:25:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:15 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.6 from job set of time 1523975100000 ms 18/04/17 17:25:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1383.0 (TID 1383) in 19293 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:25:19 INFO scheduler.DAGScheduler: ResultStage 1383 (foreachPartition at PredictorEngineApp.java:153) finished in 19.294 s 18/04/17 17:25:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 1383.0, whose tasks have all completed, from pool 18/04/17 17:25:19 INFO scheduler.DAGScheduler: Job 1384 finished: foreachPartition at PredictorEngineApp.java:153, took 19.331936 s 18/04/17 17:25:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x162f5608 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x162f56080x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37576, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a983e, negotiated timeout = 60000 18/04/17 17:25:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a983e 18/04/17 17:25:19 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a983e closed 18/04/17 17:25:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:19 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.26 from job set of time 1523975100000 ms 18/04/17 17:25:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1395.0 (TID 1395) in 22042 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:25:22 INFO scheduler.DAGScheduler: ResultStage 1395 (foreachPartition at PredictorEngineApp.java:153) finished in 22.043 s 18/04/17 17:25:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 1395.0, whose tasks have all completed, from pool 18/04/17 17:25:22 INFO scheduler.DAGScheduler: Job 1395 finished: foreachPartition at PredictorEngineApp.java:153, took 22.118393 s 18/04/17 17:25:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5182b0b7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5182b0b70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:43966, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98a4, negotiated timeout = 60000 18/04/17 17:25:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98a4 18/04/17 17:25:22 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98a4 closed 18/04/17 17:25:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:22 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.10 from job set of time 1523975100000 ms 18/04/17 17:25:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1398.0 (TID 1398) in 22704 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:25:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 1398.0, whose tasks have all completed, from pool 18/04/17 17:25:22 INFO scheduler.DAGScheduler: ResultStage 1398 (foreachPartition at PredictorEngineApp.java:153) finished in 22.705 s 18/04/17 17:25:22 INFO scheduler.DAGScheduler: Job 1398 finished: foreachPartition at PredictorEngineApp.java:153, took 22.786582 s 18/04/17 17:25:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1a9fc6fa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1a9fc6fa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48564, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29175, negotiated timeout = 60000 18/04/17 17:25:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29175 18/04/17 17:25:22 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29175 closed 18/04/17 17:25:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:22 INFO scheduler.JobScheduler: Finished job streaming job 1523975100000 ms.22 from job set of time 1523975100000 ms 18/04/17 17:25:22 INFO scheduler.JobScheduler: Total delay: 22.887 s for time 1523975100000 ms (execution: 22.841 s) 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1836 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1836 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1836 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1836 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1837 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1837 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1837 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1837 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1838 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1838 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1838 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1838 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1839 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1839 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1839 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1839 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1840 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1840 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1840 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1840 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1841 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1841 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1841 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1841 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1842 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1842 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1842 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1842 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1843 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1843 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1843 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1843 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1844 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1844 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1844 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1844 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1845 
from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1845 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1845 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1845 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1846 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1846 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1846 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1846 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1847 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1847 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1847 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1847 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1848 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1848 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1848 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1848 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1849 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1849 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1849 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1849 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1850 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1850 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1850 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1850 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1851 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1851 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1851 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1851 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1852 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1852 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1852 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1852 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1853 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1853 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1853 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1853 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1854 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1854 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1854 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1854 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1855 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1855 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1855 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1855 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1856 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1856 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1856 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1856 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1857 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1857 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1857 from 
persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1857 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1858 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1858 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1858 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1858 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1859 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1859 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1859 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1859 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1860 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1860 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1860 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1860 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1861 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1861 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1861 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1861 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1862 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1862 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1862 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1862 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1863 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1863 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1863 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1863 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1864 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1864 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1864 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1864 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1865 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1865 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1865 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1865 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1866 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1866 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1866 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1866 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1867 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1867 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1867 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1867 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1868 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1868 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1868 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1868 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1869 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1869 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1869 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1869 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1870 from 
persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1870 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1870 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1870 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1871 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1871 18/04/17 17:25:22 INFO kafka.KafkaRDD: Removing RDD 1871 from persistence list 18/04/17 17:25:22 INFO storage.BlockManager: Removing RDD 1871 18/04/17 17:25:22 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:25:22 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523974980000 ms 18/04/17 17:25:48 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1086.0 (TID 1086) in 708022 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:25:48 INFO cluster.YarnClusterScheduler: Removed TaskSet 1086.0, whose tasks have all completed, from pool 18/04/17 17:25:48 INFO scheduler.DAGScheduler: ResultStage 1086 (foreachPartition at PredictorEngineApp.java:153) finished in 708.022 s 18/04/17 17:25:48 INFO scheduler.DAGScheduler: Job 1086 finished: foreachPartition at PredictorEngineApp.java:153, took 708.065910 s 18/04/17 17:25:48 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x13e7cd98 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:25:48 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x13e7cd980x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:25:48 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:25:48 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48617, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:25:48 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2917c, negotiated timeout = 60000 18/04/17 17:25:48 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2917c 18/04/17 17:25:48 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2917c closed 18/04/17 17:25:48 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:25:48 INFO scheduler.JobScheduler: Finished job streaming job 1523974440000 ms.26 from job set of time 1523974440000 ms 18/04/17 17:25:48 INFO scheduler.JobScheduler: Total delay: 708.153 s for time 1523974440000 ms (execution: 708.104 s) 18/04/17 17:25:48 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:25:48 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 17:26:00 INFO scheduler.JobScheduler: Added jobs for time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.1 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.2 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.3 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.0 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.0 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.3 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.4 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.6 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.5 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.4 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.8 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.7 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.9 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.10 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.11 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.12 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.13 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.14 from job set of time 1523975160000 ms 
18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.13 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.14 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.15 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.16 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.16 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.17 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.17 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.19 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.18 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.20 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.21 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.22 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.23 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.21 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.25 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.24 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.26 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.27 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.28 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.29 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.30 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.31 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.30 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.32 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.33 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.34 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975160000 ms.35 from job set of time 
1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.35 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1399 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1399 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1399 (KafkaRDD[1933] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1399 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO spark.SparkContext: Starting job: 
foreachPartition at PredictorEngineApp.java:153 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1399_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1399_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1399 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1399 (KafkaRDD[1933] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1399.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1400 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1400 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1400 (KafkaRDD[1931] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1399.0 (TID 1399, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1400 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1400_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1400_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1400 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1400 (KafkaRDD[1931] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1400.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1401 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1401 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1401 (KafkaRDD[1937] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1400.0 (TID 1400, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1401 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1401_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1401_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1401 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 
missing tasks from ResultStage 1401 (KafkaRDD[1937] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1401.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1402 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1402 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1402 (KafkaRDD[1910] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1401.0 (TID 1401, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1402 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1402_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1402_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1402 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1402 (KafkaRDD[1910] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1402.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1403 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1403 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1403 (KafkaRDD[1935] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1402.0 (TID 1402, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1403 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1403_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1403_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1403 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1403 (KafkaRDD[1935] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1403.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1404 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1404 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO 
scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1404 (KafkaRDD[1932] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1404 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1403.0 (TID 1403, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1404_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1404_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1404 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1404 (KafkaRDD[1932] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1404.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1405 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1405 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1405 (KafkaRDD[1926] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1405 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1404.0 (TID 1404, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1405_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1405_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1405 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1405 (KafkaRDD[1926] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1405.0 with 1 tasks 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1399_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1406 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1406 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1406 (KafkaRDD[1913] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO scheduler.TaskSetManager: 
Starting task 0.0 in stage 1405.0 (TID 1405, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1406 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1377 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1372_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1372_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1401_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1406_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1406_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1406 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1406 (KafkaRDD[1913] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1406.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1407 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1407 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1407 (KafkaRDD[1909] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1406.0 (TID 1406, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1373_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1407 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1403_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1373_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1379 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1377_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1377_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1407_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1407_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1407 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks 
from ResultStage 1407 (KafkaRDD[1909] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1407.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1408 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1408 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1381 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1408 (KafkaRDD[1916] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1407.0 (TID 1407, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1379_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1408 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1379_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1380 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1378_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1406_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1408_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1408_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1408 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1408 (KafkaRDD[1916] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1408.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1409 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1409 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1409 (KafkaRDD[1915] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1378_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1408.0 (TID 1408, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1409 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1405_piece0 in memory 
on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1373 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1376 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1378 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1380_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1380_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1409_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1409_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1409 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1409 (KafkaRDD[1915] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1409.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1410 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1410 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1410 (KafkaRDD[1918] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1410 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1407_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1409.0 (TID 1409, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1410_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1410_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1410 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1410 (KafkaRDD[1918] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1410.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1411 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1411 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1411 (KafkaRDD[1941] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1408_piece0 in memory 
on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1411 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1410.0 (TID 1410, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1409_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1411_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1411_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1411 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1411 (KafkaRDD[1941] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1411.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1412 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1412 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1412 (KafkaRDD[1919] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1411.0 (TID 1411, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1412 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1410_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1412_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1412_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1412 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1412 (KafkaRDD[1919] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1412.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1413 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1413 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1413 (KafkaRDD[1934] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1413 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 
INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1412.0 (TID 1412, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1413_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1413_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1413 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1404_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1413 (KafkaRDD[1934] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1413.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1414 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1414 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1414 (KafkaRDD[1928] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1414 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1382_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1413.0 (TID 1413, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1382_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1411_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1412_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1383 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1381_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1414_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1414_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1414 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1414 (KafkaRDD[1928] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1414.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1415 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1415 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO 
scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1415 (KafkaRDD[1939] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1415 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1414.0 (TID 1414, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1381_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1382 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1400_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1384_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1415_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1415_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1415 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1415 (KafkaRDD[1939] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1415.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1416 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1416 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1416 (KafkaRDD[1942] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1416 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1415.0 (TID 1415, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1384_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1413_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1385 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1383_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1416_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1416_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1416 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 
INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1416 (KafkaRDD[1942] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1416.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1417 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1417 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1417 (KafkaRDD[1920] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1417 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1416.0 (TID 1416, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1415_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1383_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1414_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1417_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1417_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1417 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1417 (KafkaRDD[1920] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1417.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1418 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1418 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1418 (KafkaRDD[1940] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1384 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1374 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1387 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1418 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1385_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1417.0 (TID 1417, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1385_piece0 on ***hostname masked***:57847 
in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1418_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1418_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1418 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1418 (KafkaRDD[1940] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1418.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1419 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1419 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1419 (KafkaRDD[1936] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1419 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1418.0 (TID 1418, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1417_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1419_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1419_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1419 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1419 (KafkaRDD[1936] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1419.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1420 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1420 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1420 (KafkaRDD[1927] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1420 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1419.0 (TID 1419, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1386 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1389 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1418_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO 
storage.BlockManagerInfo: Removed broadcast_1387_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1402_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1420_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1387_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1420_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1420 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1420 (KafkaRDD[1927] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1420.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1421 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1421 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1421 (KafkaRDD[1917] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1388 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1421 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1386_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1420.0 (TID 1420, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1386_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1375_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1375_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1419_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1390 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1421_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1421_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1388_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1421 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1421 (KafkaRDD[1917] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO 
cluster.YarnClusterScheduler: Adding task set 1421.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1422 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1422 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1422 (KafkaRDD[1914] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1422 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1421.0 (TID 1421, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1388_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1408.0 (TID 1408) in 56 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1408.0, whose tasks have all completed, from pool 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1392 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1422_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1390_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1422_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1422 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1422 (KafkaRDD[1914] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1422.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1423 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1423 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1423 (KafkaRDD[1930] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1423 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1390_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1422.0 (TID 1422, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1420_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1416_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 
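
The entries around this point trace one Spark 1.6 Streaming batch: each output operation becomes a single-stage job over a KafkaRDD produced by createDirectStream (PredictorEngineApp.java:125) and consumed by a foreachPartition action (PredictorEngineApp.java:153), with an HBase/ZooKeeper connection opened and closed around each job. The following is a minimal sketch of that pattern, not the application's actual source; the broker, topic, table, and column names are hypothetical placeholders, and only the createDirectStream/foreachPartition/HBase-connection shape is taken from the log.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public class DirectStreamSketch {
  public static void main(String[] args) throws Exception {
    // Batch interval is illustrative; the batch timestamps above fall on minute boundaries.
    JavaStreamingContext jssc = new JavaStreamingContext(
        new SparkConf().setAppName("direct-stream-sketch"), Durations.seconds(60));

    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "broker:9092");   // placeholder broker
    Set<String> topics = Collections.singleton("events");     // placeholder topic

    // Each batch materializes this stream as one KafkaRDD, like the KafkaRDD[19xx]
    // instances submitted as single-stage jobs in the log.
    JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, topics);

    // The cast selects the VoidFunction overload of foreachRDD explicitly.
    stream.foreachRDD((VoidFunction<JavaPairRDD<String, String>>) rdd ->
        // One ResultStage with one task per Kafka partition; the jobs above each
        // report "1 output partitions".
        rdd.foreachPartition(records -> {
          Configuration hbaseConf = HBaseConfiguration.create();
          // A fresh connection per task is what opens (and then closes) a
          // ZooKeeper session around every job in the trace.
          try (Connection conn = ConnectionFactory.createConnection(hbaseConf);
               Table table = conn.getTable(TableName.valueOf("predictions"))) { // placeholder table
            while (records.hasNext()) {
              Tuple2<String, String> record = records.next();
              Put put = new Put(Bytes.toBytes(record._1()));
              put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("v"),   // placeholder family/qualifier
                  Bytes.toBytes(record._2()));
              table.put(put);
            }
          }
        }));

    jssc.start();
    jssc.awaitTermination();
  }
}

Opening the connection inside foreachPartition, once per task rather than once per record, matches the single hconnection/ZooKeeper session that is established and immediately closed around each finished job in the entries above and below.
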
18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1423_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1423_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1391 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1423 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1423 (KafkaRDD[1930] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1423.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Got job 1424 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1424 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1424 (KafkaRDD[1923] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1389_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1424 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1423.0 (TID 1423, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1389_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.MemoryStore: Block broadcast_1424_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1424_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:26:00 INFO spark.SparkContext: Created broadcast 1424 from broadcast at DAGScheduler.scala:1006 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1424 (KafkaRDD[1923] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Adding task set 1424.0 with 1 tasks 18/04/17 17:26:00 INFO scheduler.DAGScheduler: ResultStage 1408 (foreachPartition at PredictorEngineApp.java:153) finished in 0.062 s 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Job 1408 finished: foreachPartition at PredictorEngineApp.java:153, took 0.109232 s 18/04/17 17:26:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x24b9dec4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x24b9dec40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1424.0 (TID 1424, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: 
Added broadcast_1422_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1394 18/04/17 17:26:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1392_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37733, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1392_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1421_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1423_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1393 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1391_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1391_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1374_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9850, negotiated timeout = 60000 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1406.0 (TID 1406) in 82 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1406.0, whose tasks have all completed, from pool 18/04/17 17:26:00 INFO scheduler.DAGScheduler: ResultStage 1406 (foreachPartition at PredictorEngineApp.java:153) finished in 0.082 s 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Job 1406 finished: foreachPartition at PredictorEngineApp.java:153, took 0.119277 s 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1374_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5a62e8b4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5a62e8b40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48711, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Added broadcast_1424_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1394_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1394_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2917f, negotiated timeout = 60000 18/04/17 17:26:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1404.0 (TID 1404) in 106 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:26:00 INFO scheduler.DAGScheduler: ResultStage 1404 (foreachPartition at PredictorEngineApp.java:153) finished in 0.107 s 18/04/17 17:26:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1404.0, whose tasks have all completed, from pool 18/04/17 17:26:00 INFO scheduler.DAGScheduler: Job 1404 finished: foreachPartition at PredictorEngineApp.java:153, took 0.125396 s 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1395 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1393_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1393_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1396_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1396_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9850 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1397 18/04/17 17:26:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2917f 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1395_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1395_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9850 closed 18/04/17 17:26:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2917f closed 18/04/17 17:26:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.5 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.8 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.24 from job set of time 1523975160000 ms 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1396 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1398_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 
491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1398_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1399 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1397_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1397_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1398 18/04/17 17:26:00 INFO spark.ContextCleaner: Cleaned accumulator 1375 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1376_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:00 INFO storage.BlockManagerInfo: Removed broadcast_1376_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1399.0 (TID 1399) in 2890 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:26:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1399.0, whose tasks have all completed, from pool 18/04/17 17:26:02 INFO scheduler.DAGScheduler: ResultStage 1399 (foreachPartition at PredictorEngineApp.java:153) finished in 2.890 s 18/04/17 17:26:02 INFO scheduler.DAGScheduler: Job 1399 finished: foreachPartition at PredictorEngineApp.java:153, took 2.895234 s 18/04/17 17:26:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72b7f4f1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x72b7f4f10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37742, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9856, negotiated timeout = 60000 18/04/17 17:26:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9856 18/04/17 17:26:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9856 closed 18/04/17 17:26:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:02 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.25 from job set of time 1523975160000 ms 18/04/17 17:26:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1409.0 (TID 1409) in 4756 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:26:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1409.0, whose tasks have all completed, from pool 18/04/17 17:26:04 INFO scheduler.DAGScheduler: ResultStage 1409 (foreachPartition at PredictorEngineApp.java:153) finished in 4.756 s 18/04/17 17:26:04 INFO scheduler.DAGScheduler: Job 1409 finished: foreachPartition at PredictorEngineApp.java:153, took 4.807371 s 18/04/17 17:26:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x317a985b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x317a985b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44130, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98b9, negotiated timeout = 60000 18/04/17 17:26:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98b9 18/04/17 17:26:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98b9 closed 18/04/17 17:26:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:04 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.7 from job set of time 1523975160000 ms 18/04/17 17:26:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1414.0 (TID 1414) in 5871 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:26:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1414.0, whose tasks have all completed, from pool 18/04/17 17:26:05 INFO scheduler.DAGScheduler: ResultStage 1414 (foreachPartition at PredictorEngineApp.java:153) finished in 5.871 s 18/04/17 17:26:05 INFO scheduler.DAGScheduler: Job 1414 finished: foreachPartition at PredictorEngineApp.java:153, took 5.941298 s 18/04/17 17:26:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6141d82 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6141d820x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37752, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a985c, negotiated timeout = 60000 18/04/17 17:26:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a985c 18/04/17 17:26:06 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a985c closed 18/04/17 17:26:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:06 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.20 from job set of time 1523975160000 ms 18/04/17 17:26:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1403.0 (TID 1403) in 6579 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:26:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1403.0, whose tasks have all completed, from pool 18/04/17 17:26:06 INFO scheduler.DAGScheduler: ResultStage 1403 (foreachPartition at PredictorEngineApp.java:153) finished in 6.579 s 18/04/17 17:26:06 INFO scheduler.DAGScheduler: Job 1403 finished: foreachPartition at PredictorEngineApp.java:153, took 6.595628 s 18/04/17 17:26:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1f84ddaa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1f84ddaa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48733, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29186, negotiated timeout = 60000 18/04/17 17:26:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29186 18/04/17 17:26:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29186 closed 18/04/17 17:26:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:06 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.27 from job set of time 1523975160000 ms 18/04/17 17:26:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1415.0 (TID 1415) in 8091 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:26:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1415.0, whose tasks have all completed, from pool 18/04/17 17:26:08 INFO scheduler.DAGScheduler: ResultStage 1415 (foreachPartition at PredictorEngineApp.java:153) finished in 8.092 s 18/04/17 17:26:08 INFO scheduler.DAGScheduler: Job 1415 finished: foreachPartition at PredictorEngineApp.java:153, took 8.166872 s 18/04/17 17:26:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5616ef33 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5616ef330x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44144, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98ba, negotiated timeout = 60000 18/04/17 17:26:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98ba 18/04/17 17:26:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98ba closed 18/04/17 17:26:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:08 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.31 from job set of time 1523975160000 ms 18/04/17 17:26:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1417.0 (TID 1417) in 9423 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:26:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1417.0, whose tasks have all completed, from pool 18/04/17 17:26:09 INFO scheduler.DAGScheduler: ResultStage 1417 (foreachPartition at PredictorEngineApp.java:153) finished in 9.425 s 18/04/17 17:26:09 INFO scheduler.DAGScheduler: Job 1417 finished: foreachPartition at PredictorEngineApp.java:153, took 9.507615 s 18/04/17 17:26:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3c99606c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3c99606c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48743, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29187, negotiated timeout = 60000 18/04/17 17:26:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29187 18/04/17 17:26:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29187 closed 18/04/17 17:26:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:09 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.12 from job set of time 1523975160000 ms 18/04/17 17:26:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1407.0 (TID 1407) in 10594 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:26:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1407.0, whose tasks have all completed, from pool 18/04/17 17:26:10 INFO scheduler.DAGScheduler: ResultStage 1407 (foreachPartition at PredictorEngineApp.java:153) finished in 10.594 s 18/04/17 17:26:10 INFO scheduler.DAGScheduler: Job 1407 finished: foreachPartition at PredictorEngineApp.java:153, took 10.636159 s 18/04/17 17:26:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x35d116be connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x35d116be0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44153, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98bc, negotiated timeout = 60000 18/04/17 17:26:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98bc 18/04/17 17:26:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98bc closed 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1418.0 (TID 1418) in 10585 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:26:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1418.0, whose tasks have all completed, from pool 18/04/17 17:26:10 INFO scheduler.DAGScheduler: ResultStage 1418 (foreachPartition at PredictorEngineApp.java:153) finished in 10.586 s 18/04/17 17:26:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1401.0 (TID 1401) in 10663 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:26:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1401.0, whose tasks have all completed, from pool 18/04/17 17:26:10 INFO scheduler.DAGScheduler: ResultStage 1401 (foreachPartition at PredictorEngineApp.java:153) finished in 10.663 s 18/04/17 17:26:10 INFO scheduler.DAGScheduler: Job 1418 finished: foreachPartition at PredictorEngineApp.java:153, took 10.672322 s 18/04/17 17:26:10 INFO scheduler.DAGScheduler: Job 1401 finished: foreachPartition at PredictorEngineApp.java:153, took 10.673855 s 18/04/17 17:26:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6e83656c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6e83656c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x340a045c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x340a045c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37774, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48752, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.1 from job set of time 1523975160000 ms 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9862, negotiated timeout = 60000 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29189, negotiated timeout = 60000 18/04/17 17:26:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9862 18/04/17 17:26:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29189 18/04/17 17:26:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9862 closed 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29189 closed 18/04/17 17:26:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.29 from job set of time 1523975160000 ms 18/04/17 17:26:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.32 from job set of time 1523975160000 ms 18/04/17 17:26:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1411.0 (TID 1411) in 13136 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:26:13 INFO scheduler.DAGScheduler: ResultStage 1411 (foreachPartition at PredictorEngineApp.java:153) finished in 13.136 s 18/04/17 17:26:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1411.0, whose tasks have all completed, from pool 18/04/17 17:26:13 INFO scheduler.DAGScheduler: Job 1411 finished: foreachPartition at PredictorEngineApp.java:153, took 13.194551 s 18/04/17 17:26:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x27da2d0c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x27da2d0c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48761, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2918b, negotiated timeout = 60000 18/04/17 17:26:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2918b 18/04/17 17:26:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2918b closed 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:13 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.33 from job set of time 1523975160000 ms 18/04/17 17:26:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1405.0 (TID 1405) in 13240 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:26:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1405.0, whose tasks have all completed, from pool 18/04/17 17:26:13 INFO scheduler.DAGScheduler: ResultStage 1405 (foreachPartition at PredictorEngineApp.java:153) finished in 13.240 s 18/04/17 17:26:13 INFO scheduler.DAGScheduler: Job 1405 finished: foreachPartition at PredictorEngineApp.java:153, took 13.272487 s 18/04/17 17:26:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3bc78293 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3bc782930x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44169, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98bf, negotiated timeout = 60000 18/04/17 17:26:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98bf 18/04/17 17:26:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98bf closed 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:13 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.18 from job set of time 1523975160000 ms 18/04/17 17:26:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1422.0 (TID 1422) in 13461 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:26:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1422.0, whose tasks have all completed, from pool 18/04/17 17:26:13 INFO scheduler.DAGScheduler: ResultStage 1422 (foreachPartition at PredictorEngineApp.java:153) finished in 13.461 s 18/04/17 17:26:13 INFO scheduler.DAGScheduler: Job 1422 finished: foreachPartition at PredictorEngineApp.java:153, took 13.565375 s 18/04/17 17:26:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x42ee2513 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x42ee25130x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44172, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98c1, negotiated timeout = 60000 18/04/17 17:26:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98c1 18/04/17 17:26:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1419.0 (TID 1419) in 13486 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:26:13 INFO scheduler.DAGScheduler: ResultStage 1419 (foreachPartition at PredictorEngineApp.java:153) finished in 13.488 s 18/04/17 17:26:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1419.0, whose tasks have all completed, from pool 18/04/17 17:26:13 INFO scheduler.DAGScheduler: Job 1419 finished: foreachPartition at PredictorEngineApp.java:153, took 13.578207 s 18/04/17 17:26:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5f1ad69f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5f1ad69f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44175, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98c1 closed 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98c2, negotiated timeout = 60000 18/04/17 17:26:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98c2 18/04/17 17:26:13 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.6 from job set of time 1523975160000 ms 18/04/17 17:26:13 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98c2 closed 18/04/17 17:26:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:13 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.28 from job set of time 1523975160000 ms 18/04/17 17:26:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1402.0 (TID 1402) in 14791 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:26:14 INFO scheduler.DAGScheduler: ResultStage 1402 (foreachPartition at PredictorEngineApp.java:153) finished in 14.791 s 18/04/17 17:26:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1402.0, whose tasks have all completed, from pool 18/04/17 17:26:14 INFO scheduler.DAGScheduler: Job 1402 finished: foreachPartition at PredictorEngineApp.java:153, took 14.804660 s 18/04/17 17:26:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x51ee8a9a connecting to ZooKeeper ensemble=***hostname 
masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x51ee8a9a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44179, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98c3, negotiated timeout = 60000 18/04/17 17:26:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98c3 18/04/17 17:26:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98c3 closed 18/04/17 17:26:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:14 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.2 from job set of time 1523975160000 ms 18/04/17 17:26:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1410.0 (TID 1410) in 14814 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:26:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1410.0, whose tasks have all completed, from pool 18/04/17 17:26:14 INFO scheduler.DAGScheduler: ResultStage 1410 (foreachPartition at PredictorEngineApp.java:153) finished in 14.814 s 18/04/17 17:26:14 INFO scheduler.DAGScheduler: Job 1410 finished: foreachPartition at PredictorEngineApp.java:153, took 14.869351 s 18/04/17 17:26:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x62d78a8f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x62d78a8f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44182, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98c5, negotiated timeout = 60000 18/04/17 17:26:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98c5 18/04/17 17:26:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98c5 closed 18/04/17 17:26:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:14 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.10 from job set of time 1523975160000 ms 18/04/17 17:26:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1400.0 (TID 1400) in 15612 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:26:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1400.0, whose tasks have all completed, from pool 18/04/17 17:26:15 INFO scheduler.DAGScheduler: ResultStage 1400 (foreachPartition at PredictorEngineApp.java:153) finished in 15.613 s 18/04/17 17:26:15 INFO scheduler.DAGScheduler: Job 1400 finished: foreachPartition at PredictorEngineApp.java:153, took 15.620863 s 18/04/17 17:26:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x330f95eb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x330f95eb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44186, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98c6, negotiated timeout = 60000 18/04/17 17:26:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98c6 18/04/17 17:26:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98c6 closed 18/04/17 17:26:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:15 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.23 from job set of time 1523975160000 ms 18/04/17 17:26:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1416.0 (TID 1416) in 15739 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:26:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1416.0, whose tasks have all completed, from pool 18/04/17 17:26:15 INFO scheduler.DAGScheduler: ResultStage 1416 (foreachPartition at PredictorEngineApp.java:153) finished in 15.740 s 18/04/17 17:26:15 INFO scheduler.DAGScheduler: Job 1416 finished: foreachPartition at PredictorEngineApp.java:153, took 15.818337 s 18/04/17 17:26:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6127fc0e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6127fc0e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48784, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2918d, negotiated timeout = 60000 18/04/17 17:26:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2918d 18/04/17 17:26:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2918d closed 18/04/17 17:26:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:15 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.34 from job set of time 1523975160000 ms 18/04/17 17:26:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1412.0 (TID 1412) in 17852 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:26:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1412.0, whose tasks have all completed, from pool 18/04/17 17:26:17 INFO scheduler.DAGScheduler: ResultStage 1412 (foreachPartition at PredictorEngineApp.java:153) finished in 17.852 s 18/04/17 17:26:17 INFO scheduler.DAGScheduler: Job 1412 finished: foreachPartition at PredictorEngineApp.java:153, took 17.914394 s 18/04/17 17:26:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x750474a2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x750474a20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37813, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9866, negotiated timeout = 60000 18/04/17 17:26:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9866 18/04/17 17:26:17 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9866 closed 18/04/17 17:26:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:17 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.11 from job set of time 1523975160000 ms 18/04/17 17:26:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1423.0 (TID 1423) in 19184 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:26:19 INFO scheduler.DAGScheduler: ResultStage 1423 (foreachPartition at PredictorEngineApp.java:153) finished in 19.185 s 18/04/17 17:26:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 1423.0, whose tasks have all completed, from pool 18/04/17 17:26:19 INFO scheduler.DAGScheduler: Job 1423 finished: foreachPartition at PredictorEngineApp.java:153, took 19.290541 s 18/04/17 17:26:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1297e39 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1297e390x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48796, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2918f, negotiated timeout = 60000 18/04/17 17:26:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2918f 18/04/17 17:26:19 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2918f closed 18/04/17 17:26:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:19 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.22 from job set of time 1523975160000 ms 18/04/17 17:26:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1420.0 (TID 1420) in 20790 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:26:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 1420.0, whose tasks have all completed, from pool 18/04/17 17:26:20 INFO scheduler.DAGScheduler: ResultStage 1420 (foreachPartition at PredictorEngineApp.java:153) finished in 20.791 s 18/04/17 17:26:20 INFO scheduler.DAGScheduler: Job 1420 finished: foreachPartition at PredictorEngineApp.java:153, took 20.886598 s 18/04/17 17:26:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1dd0785b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1dd0785b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44207, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98ca, negotiated timeout = 60000 18/04/17 17:26:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98ca 18/04/17 17:26:20 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98ca closed 18/04/17 17:26:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:20 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.19 from job set of time 1523975160000 ms 18/04/17 17:26:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1421.0 (TID 1421) in 20850 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:26:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 1421.0, whose tasks have all completed, from pool 18/04/17 17:26:21 INFO scheduler.DAGScheduler: ResultStage 1421 (foreachPartition at PredictorEngineApp.java:153) finished in 20.851 s 18/04/17 17:26:21 INFO scheduler.DAGScheduler: Job 1421 finished: foreachPartition at PredictorEngineApp.java:153, took 20.951268 s 18/04/17 17:26:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x72478463 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x724784630x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44210, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98cd, negotiated timeout = 60000 18/04/17 17:26:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98cd 18/04/17 17:26:21 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98cd closed 18/04/17 17:26:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:21 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.9 from job set of time 1523975160000 ms 18/04/17 17:26:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1424.0 (TID 1424) in 20986 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:26:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 1424.0, whose tasks have all completed, from pool 18/04/17 17:26:21 INFO scheduler.DAGScheduler: ResultStage 1424 (foreachPartition at PredictorEngineApp.java:153) finished in 20.987 s 18/04/17 17:26:21 INFO scheduler.DAGScheduler: Job 1424 finished: foreachPartition at PredictorEngineApp.java:153, took 21.094700 s 18/04/17 17:26:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x47116373 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x471163730x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48809, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29191, negotiated timeout = 60000 18/04/17 17:26:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29191 18/04/17 17:26:21 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29191 closed 18/04/17 17:26:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:21 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.15 from job set of time 1523975160000 ms 18/04/17 17:26:27 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1413.0 (TID 1413) in 27521 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:26:27 INFO cluster.YarnClusterScheduler: Removed TaskSet 1413.0, whose tasks have all completed, from pool 18/04/17 17:26:27 INFO scheduler.DAGScheduler: ResultStage 1413 (foreachPartition at PredictorEngineApp.java:153) finished in 27.521 s 18/04/17 17:26:27 INFO scheduler.DAGScheduler: Job 1413 finished: foreachPartition at PredictorEngineApp.java:153, took 27.588067 s 18/04/17 17:26:27 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x57e97687 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:26:27 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x57e976870x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:26:27 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:26:27 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44228, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:26:27 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98cf, negotiated timeout = 60000 18/04/17 17:26:27 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98cf 18/04/17 17:26:27 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98cf closed 18/04/17 17:26:27 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:26:27 INFO scheduler.JobScheduler: Finished job streaming job 1523975160000 ms.26 from job set of time 1523975160000 ms 18/04/17 17:26:27 INFO scheduler.JobScheduler: Total delay: 27.672 s for time 1523975160000 ms (execution: 27.626 s) 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1872 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1872 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1872 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1872 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1873 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1873 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1873 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1873 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1874 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1874 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1874 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1874 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1875 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1875 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1875 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1875 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1876 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1876 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1876 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1876 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1877 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1877 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1877 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1877 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1878 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1878 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1878 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1878 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1879 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1879 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1879 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1879 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1880 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1880 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1880 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1880 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1881 
from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1413_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1881 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1881 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1413_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1881 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1882 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1882 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1400 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1882 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1882 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1883 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1400_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1883 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1883 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1400_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1883 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1884 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1884 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1401 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1884 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1884 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1885 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1399_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1885 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1885 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1885 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1886 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1399_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1886 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1886 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1886 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1887 from persistence list 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1403 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1887 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1887 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1887 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1888 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1401_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1888 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1888 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1401_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 
17:26:27 INFO storage.BlockManager: Removing RDD 1888 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1889 from persistence list 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1402 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1889 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1889 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1889 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1890 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1403_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1890 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1890 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1890 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1891 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1891 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1891 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1891 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1892 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1403_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1892 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1892 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1892 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1893 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1893 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1893 from persistence list 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1404 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1893 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1894 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1894 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1894 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1402_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1894 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1895 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1895 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1895 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1402_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1895 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1406 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1896 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1896 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1404_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1896 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1896 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1897 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1404_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 
1897 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1897 from persistence list 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1405 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1897 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1898 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1898 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1898 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1406_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1898 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1899 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1899 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1899 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1406_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1899 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1900 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1900 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1407 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1900 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1900 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1901 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1405_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1901 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1901 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1901 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1902 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1405_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1902 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1902 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1902 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1903 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1903 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1903 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1424_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1903 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1904 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1424_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1904 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1904 from persistence list 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1425 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1904 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1905 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1905 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1905 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1423_piece0 on ***IP masked***:45737 in 
memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1905 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1906 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1906 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1906 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1906 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1907 from persistence list 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1423_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1907 18/04/17 17:26:27 INFO kafka.KafkaRDD: Removing RDD 1907 from persistence list 18/04/17 17:26:27 INFO storage.BlockManager: Removing RDD 1907 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1409 18/04/17 17:26:27 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:26:27 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523975040000 ms 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1407_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1407_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1408 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1410 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1408_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1408_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1410_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1410_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1411 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1409_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1409_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1413 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1411_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1411_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1412 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1414 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1412_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1412_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1416 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1414_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1414_piece0 on 
***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1415 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1416_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1416_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1417 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1415_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1415_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1419 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1417_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1417_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1418 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1419_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1419_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1420 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1418_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1418_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1422 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1420_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1420_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1421 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1422_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1422_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1423 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1421_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:26:27 INFO storage.BlockManagerInfo: Removed broadcast_1421_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:26:27 INFO spark.ContextCleaner: Cleaned accumulator 1424 18/04/17 17:27:00 INFO scheduler.JobScheduler: Added jobs for time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.0 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.1 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.2 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO 
scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.0 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.3 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.4 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.4 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.6 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.3 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.5 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.8 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.7 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.9 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.10 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.11 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.12 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.13 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.13 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.14 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.15 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.14 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.17 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.17 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.16 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.19 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.18 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.16 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.20 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.22 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.21 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.23 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.21 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.24 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.26 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.25 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.27 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.28 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.29 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.30 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.31 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.30 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.32 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.33 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.34 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975220000 ms.35 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1427 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1425 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1425 (KafkaRDD[1951] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: 
foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1425 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1425_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1425_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1425 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1425 (KafkaRDD[1951] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1425.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1426 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1426 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1426 (KafkaRDD[1976] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1425.0 (TID 1425, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1426 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block 
broadcast_1426_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1426_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.7 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1426 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1426 (KafkaRDD[1976] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1426.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1425 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1427 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1427 (KafkaRDD[1966] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1426.0 (TID 1426, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1427 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1427_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1427_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1427 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1427 (KafkaRDD[1966] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1427.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1429 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1428 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1428 (KafkaRDD[1955] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1427.0 (TID 1427, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1428 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1428_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1428_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1428 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1428 (KafkaRDD[1955] at createDirectStream at 
PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1428.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1428 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1429 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1429 (KafkaRDD[1953] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1428.0 (TID 1428, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1429 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1425_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1429_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1429_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1429 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1429 (KafkaRDD[1953] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1429.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1430 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1430 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1430 (KafkaRDD[1969] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1429.0 (TID 1429, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1430 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1426_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1430_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1430_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1430 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1430 (KafkaRDD[1969] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1430.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1432 (foreachPartition at 
PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1431 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1431 (KafkaRDD[1977] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1430.0 (TID 1430, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1431 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1431_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1431_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1428_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1431 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1431 (KafkaRDD[1977] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1431.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1431 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1432 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1432 (KafkaRDD[1978] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1432 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1431.0 (TID 1431, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1429_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1432_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1432_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1432 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1432 (KafkaRDD[1978] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1432.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1433 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1433 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO 
scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1433 (KafkaRDD[1967] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1433 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1432.0 (TID 1432, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1433_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1433_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1427_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1433 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1433 (KafkaRDD[1967] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1433.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1434 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1434 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1434 (KafkaRDD[1964] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1434 stored as values in memory (estimated size 5.7 KB, free 491.6 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1433.0 (TID 1433, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1430_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1432_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1434_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1434_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1434 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1434 (KafkaRDD[1964] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1434.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1435 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1435 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 
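[The scheduler entries in this log all point at two call sites in the driver: createDirectStream at PredictorEngineApp.java:125 and foreachPartition at PredictorEngineApp.java:153. A minimal sketch of what that driver presumably looks like, using Spark 1.6's Java Kafka direct-stream API, follows; the broker list, topic name, batch interval and per-record handling are hypothetical and are not taken from this log.]

import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

// Hypothetical reconstruction of the driver seen in this log (PredictorEngineApp), not the
// actual source: it only illustrates the two call sites the DAGScheduler/JobScheduler report.
public class PredictorEngineSketch {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("predictor-engine");
    // Assumed 60 s batch interval; the log only shows minute-aligned batch times.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

    Map<String, String> kafkaParams = new HashMap<String, String>();
    kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // hypothetical brokers
    Set<String> topics = new HashSet<String>(Collections.singletonList("events")); // hypothetical topic

    // Corresponds to "createDirectStream at PredictorEngineApp.java:125": each batch produces
    // the KafkaRDD[...] instances that the DAGScheduler submits as single-task ResultStages.
    JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, topics);

    // Corresponds to "foreachPartition at PredictorEngineApp.java:153": one streaming job per
    // output operation, each running foreachPartition over that batch's KafkaRDD. Opening and
    // closing an HBase connection in here would match the per-task hconnection/ZooKeeper
    // session open-and-close pairs that appear later in the log.
    stream.foreachRDD(rdd ->
        rdd.foreachPartition((Iterator<Tuple2<String, String>> records) -> {
          while (records.hasNext()) {
            Tuple2<String, String> record = records.next();
            // hypothetical: score the record and write the prediction out (e.g. to HBase)
          }
        }));

    jssc.start();
    jssc.awaitTermination();
  }
}

[Note that batch 1523975220000 ms lists jobs ms.0 through ms.35 and the stages reference many distinct KafkaRDD ids (roughly 1945-1979), so the real application presumably defines several such direct streams or output operations rather than the single one sketched above.]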
18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1435 (KafkaRDD[1952] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1435 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1434.0 (TID 1434, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1435_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1435_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1435 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1435 (KafkaRDD[1952] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1435.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1437 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1436 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1436 (KafkaRDD[1979] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1431_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1436 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1435.0 (TID 1435, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1433_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1436_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1436_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1436 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1436 (KafkaRDD[1979] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1436.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1436 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1437 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1437 (KafkaRDD[1973] at createDirectStream at 
PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1437 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1436.0 (TID 1436, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1434_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1437_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1437_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1437 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1437 (KafkaRDD[1973] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1437.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1438 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1438 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1438 (KafkaRDD[1945] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1438 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1437.0 (TID 1437, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1438_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1438_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1438 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1438 (KafkaRDD[1945] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1438.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1439 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1439 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1439 (KafkaRDD[1950] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1439 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1438.0 (TID 1438, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2063 
bytes) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1435_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1439_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1439_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1439 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1439 (KafkaRDD[1950] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1439.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1440 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1440 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1440 (KafkaRDD[1975] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1440 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1439.0 (TID 1439, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1436_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1440_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1440_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1440 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1440 (KafkaRDD[1975] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1440.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1441 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1441 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1441 (KafkaRDD[1956] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1441 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1440.0 (TID 1440, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1441_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added 
broadcast_1441_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1441 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1441 (KafkaRDD[1956] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1441.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1442 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1442 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1442 (KafkaRDD[1946] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1442 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1441.0 (TID 1441, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1437_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1442_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1442_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1442 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1442 (KafkaRDD[1946] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1442.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1444 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1443 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1443 (KafkaRDD[1970] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1443 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1439_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1440_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1442.0 (TID 1442, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1443_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1443_piece0 in memory on ***IP masked***:45737 
(size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1443 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1443 (KafkaRDD[1970] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1443.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1443 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1444 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1444 (KafkaRDD[1963] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1444 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1443.0 (TID 1443, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1444_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1444_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1444 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1444 (KafkaRDD[1963] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1444.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1445 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1445 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1442_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1445 (KafkaRDD[1962] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1445 stored as values in memory (estimated size 5.7 KB, free 491.5 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1444.0 (TID 1444, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1438_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1445_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.5 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1445_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1445 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing 
tasks from ResultStage 1445 (KafkaRDD[1962] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1445.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1447 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1446 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1446 (KafkaRDD[1968] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1446 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1445.0 (TID 1445, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1441_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1446_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1446_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1446 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1446 (KafkaRDD[1968] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1446.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1446 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1447 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1447 (KafkaRDD[1971] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1447 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1446.0 (TID 1446, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1444_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1447_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1447_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1447 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1447 (KafkaRDD[1971] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1447.0 with 1 tasks 18/04/17 17:27:00 
INFO scheduler.DAGScheduler: Got job 1448 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1448 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1448 (KafkaRDD[1954] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1448 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1447.0 (TID 1447, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1448_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1448_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1448 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1448 (KafkaRDD[1954] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1448.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1450 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1449 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1449 (KafkaRDD[1949] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1449 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1448.0 (TID 1448, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1449_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1449_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1449 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1449 (KafkaRDD[1949] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1449.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1449 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1450 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1450 (KafkaRDD[1972] at 
createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1450 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1449.0 (TID 1449, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1450_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1450_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1450 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1450 (KafkaRDD[1972] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1450.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Got job 1451 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1451 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1447_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1451 (KafkaRDD[1959] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1451 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1448_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1450.0 (TID 1450, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:27:00 INFO storage.MemoryStore: Block broadcast_1451_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1451_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:27:00 INFO spark.SparkContext: Created broadcast 1451 from broadcast at DAGScheduler.scala:1006 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1451 (KafkaRDD[1959] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Adding task set 1451.0 with 1 tasks 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1451.0 (TID 1451, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1436.0 (TID 1436) in 61 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: ResultStage 1436 (foreachPartition at PredictorEngineApp.java:153) finished in 0.062 s 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1436.0, whose tasks have all completed, from pool 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1449_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 
GB) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: ResultStage 1429 (foreachPartition at PredictorEngineApp.java:153) finished in 0.089 s 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1429.0 (TID 1429) in 89 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Job 1437 finished: foreachPartition at PredictorEngineApp.java:153, took 0.111549 s 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Job 1428 finished: foreachPartition at PredictorEngineApp.java:153, took 0.112364 s 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1429.0, whose tasks have all completed, from pool 18/04/17 17:27:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x617e61d5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x40f96638 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x617e61d50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x40f966380x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1450_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44360, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44359, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1451_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98da, negotiated timeout = 60000 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98db, negotiated timeout = 60000 18/04/17 17:27:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98da 18/04/17 17:27:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98db 18/04/17 17:27:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98db closed 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98da closed 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1443_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1445_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.35 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO storage.BlockManagerInfo: Added broadcast_1446_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.9 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1447.0 (TID 1447) in 64 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1447.0, whose tasks have all completed, from pool 18/04/17 17:27:00 INFO scheduler.DAGScheduler: ResultStage 1447 (foreachPartition at PredictorEngineApp.java:153) finished in 0.065 s 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Job 1446 finished: foreachPartition at PredictorEngineApp.java:153, took 0.162226 s 18/04/17 17:27:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x35f6852d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x35f6852d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48960, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2919d, negotiated timeout = 60000 18/04/17 17:27:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2919d 18/04/17 17:27:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2919d closed 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.27 from job set of time 1523975220000 ms 18/04/17 17:27:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1446.0 (TID 1446) in 164 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:27:00 INFO scheduler.DAGScheduler: ResultStage 1446 (foreachPartition at PredictorEngineApp.java:153) finished in 0.165 s 18/04/17 17:27:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1446.0, whose tasks have all completed, from pool 18/04/17 17:27:00 INFO scheduler.DAGScheduler: Job 1447 finished: foreachPartition at PredictorEngineApp.java:153, took 0.259416 s 18/04/17 17:27:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x31e86bbb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x31e86bbb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:37986, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a987c, negotiated timeout = 60000 18/04/17 17:27:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a987c 18/04/17 17:27:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a987c closed 18/04/17 17:27:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.24 from job set of time 1523975220000 ms 18/04/17 17:27:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1430.0 (TID 1430) in 3335 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:27:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1430.0, whose tasks have all completed, from pool 18/04/17 17:27:03 INFO scheduler.DAGScheduler: ResultStage 1430 (foreachPartition at PredictorEngineApp.java:153) finished in 3.336 s 18/04/17 17:27:03 INFO scheduler.DAGScheduler: Job 1430 finished: foreachPartition at PredictorEngineApp.java:153, took 3.363318 s 18/04/17 17:27:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x39d579e3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x39d579e30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44375, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98e3, negotiated timeout = 60000 18/04/17 17:27:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98e3 18/04/17 17:27:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98e3 closed 18/04/17 17:27:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:03 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.25 from job set of time 1523975220000 ms 18/04/17 17:27:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1425.0 (TID 1425) in 3746 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:27:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1425.0, whose tasks have all completed, from pool 18/04/17 17:27:03 INFO scheduler.DAGScheduler: ResultStage 1425 (foreachPartition at PredictorEngineApp.java:153) finished in 3.746 s 18/04/17 17:27:03 INFO scheduler.DAGScheduler: Job 1427 finished: foreachPartition at PredictorEngineApp.java:153, took 3.753592 s 18/04/17 17:27:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x48474e51 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x48474e510x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44378, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98e4, negotiated timeout = 60000 18/04/17 17:27:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98e4 18/04/17 17:27:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98e4 closed 18/04/17 17:27:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:03 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.7 from job set of time 1523975220000 ms 18/04/17 17:27:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1435.0 (TID 1435) in 5573 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:27:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1435.0, whose tasks have all completed, from pool 18/04/17 17:27:05 INFO scheduler.DAGScheduler: ResultStage 1435 (foreachPartition at PredictorEngineApp.java:153) finished in 5.574 s 18/04/17 17:27:05 INFO scheduler.DAGScheduler: Job 1435 finished: foreachPartition at PredictorEngineApp.java:153, took 5.619967 s 18/04/17 17:27:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6cbb75df connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6cbb75df0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38002, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9880, negotiated timeout = 60000 18/04/17 17:27:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9880 18/04/17 17:27:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1441.0 (TID 1441) in 5573 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:27:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1441.0, whose tasks have all completed, from pool 18/04/17 17:27:05 INFO scheduler.DAGScheduler: ResultStage 1441 (foreachPartition at PredictorEngineApp.java:153) finished in 5.574 s 18/04/17 17:27:05 INFO scheduler.DAGScheduler: Job 1441 finished: foreachPartition at PredictorEngineApp.java:153, took 5.640955 s 18/04/17 17:27:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2cfce9d5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2cfce9d50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38005, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9880 closed 18/04/17 17:27:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9881, negotiated timeout = 60000 18/04/17 17:27:05 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.8 from job set of time 1523975220000 ms 18/04/17 17:27:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9881 18/04/17 17:27:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9881 closed 18/04/17 17:27:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:05 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.12 from job set of time 1523975220000 ms 18/04/17 17:27:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1451.0 (TID 1451) in 6122 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:27:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1451.0, whose tasks have all completed, from pool 18/04/17 17:27:06 INFO scheduler.DAGScheduler: ResultStage 1451 (foreachPartition at PredictorEngineApp.java:153) finished in 6.123 s 18/04/17 17:27:06 INFO scheduler.DAGScheduler: Job 1451 finished: foreachPartition at PredictorEngineApp.java:153, took 6.231050 s 18/04/17 17:27:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ab7e929 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname 
masked***:2181,***hostname masked***:2181 18/04/17 17:27:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4ab7e9290x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:48986, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291a5, negotiated timeout = 60000 18/04/17 17:27:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291a5 18/04/17 17:27:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291a5 closed 18/04/17 17:27:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:06 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.15 from job set of time 1523975220000 ms 18/04/17 17:27:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1440.0 (TID 1440) in 6970 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:27:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1440.0, whose tasks have all completed, from pool 18/04/17 17:27:07 INFO scheduler.DAGScheduler: ResultStage 1440 (foreachPartition at PredictorEngineApp.java:153) finished in 6.971 s 18/04/17 17:27:07 INFO scheduler.DAGScheduler: Job 1440 finished: foreachPartition at PredictorEngineApp.java:153, took 7.035149 s 18/04/17 17:27:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4f92c848 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4f92c8480x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44395, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98e6, negotiated timeout = 60000 18/04/17 17:27:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98e6 18/04/17 17:27:07 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98e6 closed 18/04/17 17:27:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:07 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.31 from job set of time 1523975220000 ms 18/04/17 17:27:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1439.0 (TID 1439) in 10981 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:27:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1439.0, whose tasks have all completed, from pool 18/04/17 17:27:11 INFO scheduler.DAGScheduler: ResultStage 1439 (foreachPartition at PredictorEngineApp.java:153) finished in 10.982 s 18/04/17 17:27:11 INFO scheduler.DAGScheduler: Job 1439 finished: foreachPartition at PredictorEngineApp.java:153, took 11.042220 s 18/04/17 17:27:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x53b0f65c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x53b0f65c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49001, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291a8, negotiated timeout = 60000 18/04/17 17:27:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291a8 18/04/17 17:27:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291a8 closed 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:11 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.6 from job set of time 1523975220000 ms 18/04/17 17:27:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1450.0 (TID 1450) in 11436 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:27:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1450.0, whose tasks have all completed, from pool 18/04/17 17:27:11 INFO scheduler.DAGScheduler: ResultStage 1450 (foreachPartition at PredictorEngineApp.java:153) finished in 11.437 s 18/04/17 17:27:11 INFO scheduler.DAGScheduler: Job 1449 finished: foreachPartition at PredictorEngineApp.java:153, took 11.543409 s 18/04/17 17:27:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5f6d676e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5f6d676e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44410, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98e7, negotiated timeout = 60000 18/04/17 17:27:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98e7 18/04/17 17:27:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98e7 closed 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:11 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.28 from job set of time 1523975220000 ms 18/04/17 17:27:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1432.0 (TID 1432) in 11646 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:27:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1432.0, whose tasks have all completed, from pool 18/04/17 17:27:11 INFO scheduler.DAGScheduler: ResultStage 1432 (foreachPartition at PredictorEngineApp.java:153) finished in 11.648 s 18/04/17 17:27:11 INFO scheduler.DAGScheduler: Job 1431 finished: foreachPartition at PredictorEngineApp.java:153, took 11.682707 s 18/04/17 17:27:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b750a2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7b750a20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49008, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291a9, negotiated timeout = 60000 18/04/17 17:27:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291a9 18/04/17 17:27:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291a9 closed 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:11 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.34 from job set of time 1523975220000 ms 18/04/17 17:27:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1426.0 (TID 1426) in 11864 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:27:11 INFO scheduler.DAGScheduler: ResultStage 1426 (foreachPartition at PredictorEngineApp.java:153) finished in 11.864 s 18/04/17 17:27:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1426.0, whose tasks have all completed, from pool 18/04/17 17:27:11 INFO scheduler.DAGScheduler: Job 1426 finished: foreachPartition at PredictorEngineApp.java:153, took 11.875713 s 18/04/17 17:27:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3052b623 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3052b6230x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49011, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291aa, negotiated timeout = 60000 18/04/17 17:27:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291aa 18/04/17 17:27:11 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291aa closed 18/04/17 17:27:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:11 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.32 from job set of time 1523975220000 ms 18/04/17 17:27:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1431.0 (TID 1431) in 11922 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:27:12 INFO scheduler.DAGScheduler: ResultStage 1431 (foreachPartition at PredictorEngineApp.java:153) finished in 11.923 s 18/04/17 17:27:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1431.0, whose tasks have all completed, from pool 18/04/17 17:27:12 INFO scheduler.DAGScheduler: Job 1432 finished: foreachPartition at PredictorEngineApp.java:153, took 11.954565 s 18/04/17 17:27:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6253f480 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6253f4800x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49014, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291ab, negotiated timeout = 60000 18/04/17 17:27:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291ab 18/04/17 17:27:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291ab closed 18/04/17 17:27:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.33 from job set of time 1523975220000 ms 18/04/17 17:27:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1434.0 (TID 1434) in 12108 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:27:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1434.0, whose tasks have all completed, from pool 18/04/17 17:27:12 INFO scheduler.DAGScheduler: ResultStage 1434 (foreachPartition at PredictorEngineApp.java:153) finished in 12.108 s 18/04/17 17:27:12 INFO scheduler.DAGScheduler: Job 1434 finished: foreachPartition at PredictorEngineApp.java:153, took 12.151404 s 18/04/17 17:27:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x68d69b5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x68d69b50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49018, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291ac, negotiated timeout = 60000 18/04/17 17:27:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291ac 18/04/17 17:27:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291ac closed 18/04/17 17:27:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.20 from job set of time 1523975220000 ms 18/04/17 17:27:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1433.0 (TID 1433) in 13139 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:27:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1433.0, whose tasks have all completed, from pool 18/04/17 17:27:13 INFO scheduler.DAGScheduler: ResultStage 1433 (foreachPartition at PredictorEngineApp.java:153) finished in 13.140 s 18/04/17 17:27:13 INFO scheduler.DAGScheduler: Job 1433 finished: foreachPartition at PredictorEngineApp.java:153, took 13.178878 s 18/04/17 17:27:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x372766e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x372766e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49024, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291ad, negotiated timeout = 60000 18/04/17 17:27:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291ad 18/04/17 17:27:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291ad closed 18/04/17 17:27:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:13 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.23 from job set of time 1523975220000 ms 18/04/17 17:27:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1442.0 (TID 1442) in 13937 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:27:14 INFO scheduler.DAGScheduler: ResultStage 1442 (foreachPartition at PredictorEngineApp.java:153) finished in 13.938 s 18/04/17 17:27:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1442.0, whose tasks have all completed, from pool 18/04/17 17:27:14 INFO scheduler.DAGScheduler: Job 1442 finished: foreachPartition at PredictorEngineApp.java:153, took 14.008284 s 18/04/17 17:27:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1c9547d4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1c9547d40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44433, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98eb, negotiated timeout = 60000 18/04/17 17:27:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98eb 18/04/17 17:27:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98eb closed 18/04/17 17:27:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:14 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.2 from job set of time 1523975220000 ms 18/04/17 17:27:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1185.0 (TID 1185) in 555488 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:27:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1185.0, whose tasks have all completed, from pool 18/04/17 17:27:15 INFO scheduler.DAGScheduler: ResultStage 1185 (foreachPartition at PredictorEngineApp.java:153) finished in 555.489 s 18/04/17 17:27:15 INFO scheduler.DAGScheduler: Job 1185 finished: foreachPartition at PredictorEngineApp.java:153, took 555.494483 s 18/04/17 17:27:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x555f15d8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x555f15d80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49032, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291ae, negotiated timeout = 60000 18/04/17 17:27:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291ae 18/04/17 17:27:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291ae closed 18/04/17 17:27:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:15 INFO scheduler.JobScheduler: Finished job streaming job 1523974680000 ms.23 from job set of time 1523974680000 ms 18/04/17 17:27:15 INFO scheduler.JobScheduler: Total delay: 555.580 s for time 1523974680000 ms (execution: 555.532 s) 18/04/17 17:27:15 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:27:15 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 17:27:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1449.0 (TID 1449) in 15937 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:27:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 1449.0, whose tasks have all completed, from pool 18/04/17 17:27:16 INFO scheduler.DAGScheduler: ResultStage 1449 (foreachPartition at PredictorEngineApp.java:153) finished in 15.938 s 18/04/17 17:27:16 INFO scheduler.DAGScheduler: Job 1450 finished: foreachPartition at PredictorEngineApp.java:153, took 16.041287 s 18/04/17 17:27:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x64d26c7b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x64d26c7b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44441, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98ee, negotiated timeout = 60000 18/04/17 17:27:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98ee 18/04/17 17:27:16 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98ee closed 18/04/17 17:27:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:16 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.5 from job set of time 1523975220000 ms 18/04/17 17:27:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1438.0 (TID 1438) in 16582 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:27:16 INFO scheduler.DAGScheduler: ResultStage 1438 (foreachPartition at PredictorEngineApp.java:153) finished in 16.583 s 18/04/17 17:27:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 1438.0, whose tasks have all completed, from pool 18/04/17 17:27:16 INFO scheduler.DAGScheduler: Job 1438 finished: foreachPartition at PredictorEngineApp.java:153, took 16.640208 s 18/04/17 17:27:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x19e27e96 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x19e27e960x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38062, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9889, negotiated timeout = 60000 18/04/17 17:27:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9889 18/04/17 17:27:16 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9889 closed 18/04/17 17:27:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:16 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.1 from job set of time 1523975220000 ms 18/04/17 17:27:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1437.0 (TID 1437) in 18220 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:27:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1437.0, whose tasks have all completed, from pool 18/04/17 17:27:18 INFO scheduler.DAGScheduler: ResultStage 1437 (foreachPartition at PredictorEngineApp.java:153) finished in 18.221 s 18/04/17 17:27:18 INFO scheduler.DAGScheduler: Job 1436 finished: foreachPartition at PredictorEngineApp.java:153, took 18.273813 s 18/04/17 17:27:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x234e5dd0 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x234e5dd00x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38069, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a988a, negotiated timeout = 60000 18/04/17 17:27:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a988a 18/04/17 17:27:18 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a988a closed 18/04/17 17:27:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:18 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.29 from job set of time 1523975220000 ms 18/04/17 17:27:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1444.0 (TID 1444) in 18533 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:27:18 INFO scheduler.DAGScheduler: ResultStage 1444 (foreachPartition at PredictorEngineApp.java:153) finished in 18.534 s 18/04/17 17:27:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1444.0, whose tasks have all completed, from pool 18/04/17 17:27:18 INFO scheduler.DAGScheduler: Job 1443 finished: foreachPartition at PredictorEngineApp.java:153, took 18.621886 s 18/04/17 17:27:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d46f8c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7d46f8c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44455, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98ef, negotiated timeout = 60000 18/04/17 17:27:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98ef 18/04/17 17:27:18 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98ef closed 18/04/17 17:27:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:18 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.19 from job set of time 1523975220000 ms 18/04/17 17:27:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1445.0 (TID 1445) in 19152 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:27:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 1445.0, whose tasks have all completed, from pool 18/04/17 17:27:19 INFO scheduler.DAGScheduler: ResultStage 1445 (foreachPartition at PredictorEngineApp.java:153) finished in 19.153 s 18/04/17 17:27:19 INFO scheduler.DAGScheduler: Job 1445 finished: foreachPartition at PredictorEngineApp.java:153, took 19.243764 s 18/04/17 17:27:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x774fca8b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x774fca8b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38077, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a988d, negotiated timeout = 60000 18/04/17 17:27:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a988d 18/04/17 17:27:19 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a988d closed 18/04/17 17:27:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:19 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.18 from job set of time 1523975220000 ms 18/04/17 17:27:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1428.0 (TID 1428) in 21879 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:27:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 1428.0, whose tasks have all completed, from pool 18/04/17 17:27:21 INFO scheduler.DAGScheduler: ResultStage 1428 (foreachPartition at PredictorEngineApp.java:153) finished in 21.879 s 18/04/17 17:27:21 INFO scheduler.DAGScheduler: Job 1429 finished: foreachPartition at PredictorEngineApp.java:153, took 21.898745 s 18/04/17 17:27:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x70f3aab8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x70f3aab80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38083, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a988e, negotiated timeout = 60000 18/04/17 17:27:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a988e 18/04/17 17:27:21 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a988e closed 18/04/17 17:27:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:21 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.11 from job set of time 1523975220000 ms 18/04/17 17:27:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1427.0 (TID 1427) in 21963 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:27:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 1427.0, whose tasks have all completed, from pool 18/04/17 17:27:22 INFO scheduler.DAGScheduler: ResultStage 1427 (foreachPartition at PredictorEngineApp.java:153) finished in 21.963 s 18/04/17 17:27:22 INFO scheduler.DAGScheduler: Job 1425 finished: foreachPartition at PredictorEngineApp.java:153, took 21.978608 s 18/04/17 17:27:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x450b257f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x450b257f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49063, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291b5, negotiated timeout = 60000 18/04/17 17:27:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291b5 18/04/17 17:27:22 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291b5 closed 18/04/17 17:27:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:22 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.22 from job set of time 1523975220000 ms 18/04/17 17:27:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1448.0 (TID 1448) in 25085 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:27:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 1448.0, whose tasks have all completed, from pool 18/04/17 17:27:25 INFO scheduler.DAGScheduler: ResultStage 1448 (foreachPartition at PredictorEngineApp.java:153) finished in 25.086 s 18/04/17 17:27:25 INFO scheduler.DAGScheduler: Job 1448 finished: foreachPartition at PredictorEngineApp.java:153, took 25.186959 s 18/04/17 17:27:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x40011ac6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:27:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x40011ac60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:27:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:27:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38098, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:27:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9890, negotiated timeout = 60000 18/04/17 17:27:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9890 18/04/17 17:27:25 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9890 closed 18/04/17 17:27:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:27:25 INFO scheduler.JobScheduler: Finished job streaming job 1523975220000 ms.10 from job set of time 1523975220000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Added jobs for time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.0 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.2 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.1 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.3 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.0 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.3 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.5 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.7 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.4 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.6 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.4 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.8 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.9 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.10 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.11 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.12 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.13 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.14 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.13 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.15 from job set of time 1523975280000 ms 18/04/17 17:28:00 
INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.14 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.16 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.18 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.16 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.19 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.17 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.20 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.17 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.22 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.21 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.23 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.24 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.21 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.25 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.26 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.27 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.28 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.29 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.30 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.31 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.32 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.33 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.34 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975280000 ms.35 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.30 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: 
foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1453 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1452 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1452 (KafkaRDD[2000] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1452 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1452_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1452_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1452 from broadcast at 
DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1452 (KafkaRDD[2000] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1452.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1452 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1453 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1453 (KafkaRDD[2004] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1452.0 (TID 1452, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1453 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1453_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1453_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1453 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1453 (KafkaRDD[2004] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1453.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1454 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1454 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1454 (KafkaRDD[1998] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1453.0 (TID 1453, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1454 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1454_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1454_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1454 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1454 (KafkaRDD[1998] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1454.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1455 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1455 
(foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1455 (KafkaRDD[2012] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1454.0 (TID 1454, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1455 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1455_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1455_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1455 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1455 (KafkaRDD[2012] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1455.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1456 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1456 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1456 (KafkaRDD[1995] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1455.0 (TID 1455, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1456 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1452_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1456_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1456_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1456 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1456 (KafkaRDD[1995] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1456.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1457 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1457 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1457 (KafkaRDD[2015] at createDirectStream at PredictorEngineApp.java:125), which has 
no missing parents 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1456.0 (TID 1456, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1457 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1457_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1457_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1457 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1457 (KafkaRDD[2015] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1457.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1458 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1458 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1458 (KafkaRDD[1991] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1457.0 (TID 1457, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1458 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1458_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1458_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1458 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1458 (KafkaRDD[1991] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1458.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1459 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1459 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1459 (KafkaRDD[2007] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1458.0 (TID 1458, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1459 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1453_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO 
storage.BlockManagerInfo: Added broadcast_1454_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1459_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1456_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1459_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1459 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1459 (KafkaRDD[2007] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1459.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1460 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1460 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1460 (KafkaRDD[1986] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1459.0 (TID 1459, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1460 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1460_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1460_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1460 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1460 (KafkaRDD[1986] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1460.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1461 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1461 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1457_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1461 (KafkaRDD[1999] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1460.0 (TID 1460, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1461 stored as values in memory (estimated size 
5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1455_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1458_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1459_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1461_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1461_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1461 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1460_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1461 (KafkaRDD[1999] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1461.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1462 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1462 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1462 (KafkaRDD[2013] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1448 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1427 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1462 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1461.0 (TID 1461, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1426_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1462_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1462_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1462 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1462 (KafkaRDD[2013] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1462.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1463 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1463 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1463 
(KafkaRDD[2008] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1462.0 (TID 1462, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1463 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1461_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1426_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1463_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1463_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1463 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1463 (KafkaRDD[2008] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1463.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1465 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1464 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1464 (KafkaRDD[1981] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1464 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1430 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1463.0 (TID 1463, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1428_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1462_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1428_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1429 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1425_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1464_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1464_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1464 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1464 (KafkaRDD[1981] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: 
Adding task set 1464.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1464 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1465 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1465 (KafkaRDD[1985] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1465 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1464.0 (TID 1464, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1425_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1463_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1430_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1465_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1465_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1465 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1465 (KafkaRDD[1985] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1465.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1466 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1466 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1430_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1466 (KafkaRDD[2009] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1466 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1465.0 (TID 1465, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1466_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1466_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1466 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1466 (KafkaRDD[2009] at 
createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1466.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1467 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1467 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1467 (KafkaRDD[2014] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1467 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1466.0 (TID 1466, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1467_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1467_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1467 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1467 (KafkaRDD[2014] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1467.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1468 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1468 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1468 (KafkaRDD[1989] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1468 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1467.0 (TID 1467, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1468_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1468_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1468 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1468 (KafkaRDD[1989] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1468.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1469 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1469 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO 
scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1469 (KafkaRDD[1982] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1469 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1468.0 (TID 1468, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1465_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1469_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1469_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1469 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1469 (KafkaRDD[1982] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1469.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1470 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1470 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1470 (KafkaRDD[1988] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1470 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1469.0 (TID 1469, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1467_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1470_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1470_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1470 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1470 (KafkaRDD[1988] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1470.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1471 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1471 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1471 (KafkaRDD[2011] at createDirectStream at PredictorEngineApp.java:125), which has no 
missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1471 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1470.0 (TID 1470, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1466_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1468_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1471_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1471_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1471 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1471 (KafkaRDD[2011] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1471.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1472 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1472 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1472 (KafkaRDD[1990] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1472 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1471.0 (TID 1471, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1464_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1431 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1429_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1472_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1472_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1472 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1472 (KafkaRDD[1990] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1472.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1473 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1473 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 
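Each batch in this stretch of the log follows the same pattern: the JobScheduler starts roughly three dozen output operations per batch (the batch times 1523975220000 and 1523975280000 ms are exactly 60 000 ms apart), and almost every resulting Spark job is a single-task ResultStage over a KafkaRDD produced by createDirectStream at PredictorEngineApp.java:125 and consumed by foreachPartition at PredictorEngineApp.java:153; a few jobs additionally pass through repartition at PredictorEngineApp.java:152, which shows up a little further down as ShuffleMapStage 1478 feeding ResultStage 1479. The application source is not part of this log, so the following is only a minimal sketch of a Spark 1.6 / Kafka direct-stream pipeline that would produce this stage shape; the class name, topic, Kafka parameters and the repartition condition are invented placeholders, not values taken from the log.

import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import kafka.serializer.StringDecoder;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import scala.Tuple2;

// Sketch only: not the actual PredictorEngineApp source, which is not included in this log.
public final class PredictorEnginePipelineSketch {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
    // 60-second batches, matching the 60 000 ms spacing of the batch times in the log.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

    Map<String, String> kafkaParams = new HashMap<String, String>();
    kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers

    // One direct stream per input topic; many such streams would explain the
    // ~36 independent output operations (ms.0 .. ms.35) started for every batch.
    JavaPairInputDStream<String, String> events = KafkaUtils.createDirectStream(
        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, Collections.singleton("events"));                    // placeholder topic

    events.foreachRDD((VoidFunction<JavaPairRDD<String, String>>) rdd -> {
      // Guess: most jobs in the log run foreachPartition directly on the KafkaRDD
      // (a single ResultStage), while a few go through a repartition shuffle first
      // (ShuffleMapStage + ResultStage), so the repartition is shown behind a condition.
      JavaPairRDD<String, String> work =
          rdd.partitions().size() > 1 ? rdd.repartition(1) : rdd;
      work.foreachPartition((VoidFunction<Iterator<Tuple2<String, String>>>) records -> {
        while (records.hasNext()) {
          Tuple2<String, String> kv = records.next();
          // per-record scoring / sink logic would go here
        }
      });
    });

    jssc.start();
    jssc.awaitTermination();
  }
}

With this structure every output operation becomes an independent job per batch, which matches the interleaved "Got job N" / "Submitting ResultStage N" entries above.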
18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1473 (KafkaRDD[1992] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1470_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1473 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1472.0 (TID 1472, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1469_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1429_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1473_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1428 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1473_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1473 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1473 (KafkaRDD[1992] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1473.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1474 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1474 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1474 (KafkaRDD[2005] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1431_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1474 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1473.0 (TID 1473, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1431_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1471_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1432 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1474_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1433_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1474_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1474 from broadcast at DAGScheduler.scala:1006 
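A second recurring pattern: between a Spark job completing and the JobScheduler marking the corresponding streaming job finished, the driver opens and immediately closes an HBase client connection, visible as a RecoverableZooKeeper "hconnection-0x..." session being established and then closed (see the session open/close just before "Finished job streaming job 1523975220000 ms.10" above, and again after each completed job of the 1523975280000 ms batch below). The log does not show what that connection is used for; as an illustration only, a driver-side helper along the following lines, using the HBase 1.x client API with placeholder table, family and row-key names, would leave exactly this footprint.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch only: the log shows the connection churn, not what is actually read or written.
public final class DriverSideHBaseSketch {

  // Hypothetical per-batch bookkeeping call; table, family, qualifier and row key
  // are placeholders, not names taken from the log.
  public static void recordBatch(String batchKey, long value) throws IOException {
    Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
    // ConnectionFactory.createConnection() opens the ZooKeeper session that appears as
    // "RecoverableZooKeeper: Process identifier=hconnection-0x... connecting to ZooKeeper ensemble=...".
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("predictor_batches"))) {
      Put put = new Put(Bytes.toBytes(batchKey));
      put.addColumn(Bytes.toBytes("b"), Bytes.toBytes("v"), Bytes.toBytes(value));
      table.put(put);
    }
    // Closing the connection produces the matching "Closing zookeeper sessionid=...",
    // "Session: ... closed" and "EventThread shut down" lines.
  }
}

Opening and closing a Connection like this for every output operation is what creates a new ZooKeeper session per job; reusing one long-lived Connection on the driver would avoid that churn, though whether it matters here cannot be judged from the log alone.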
18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1474 (KafkaRDD[2005] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1474.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1475 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1475 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1475 (KafkaRDD[2002] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1475 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1433_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1474.0 (TID 1474, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1434 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1432_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1432_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1475_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1475_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1475 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1475 (KafkaRDD[2002] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1475.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1476 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1476 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1476 (KafkaRDD[2003] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1476 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1433 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1475.0 (TID 1475, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1435_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1472_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 
17:28:00 INFO storage.MemoryStore: Block broadcast_1476_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1476_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1476 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1476 (KafkaRDD[2003] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1476.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1477 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1477 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1477 (KafkaRDD[1987] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1477 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1435_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1476.0 (TID 1476, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1436 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1474_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1434_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1477_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1477_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1477 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1477 (KafkaRDD[1987] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1477.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1477.0 (TID 1477, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1434_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1475_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1435 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1426 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1438 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1436_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Registering RDD 
2016 (repartition at PredictorEngineApp.java:152) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1436_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Got job 1478 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1479 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Parents of final stage: List(ShuffleMapStage 1478) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Missing parents: List(ShuffleMapStage 1478) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1437 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1440 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ShuffleMapStage 1478 (MapPartitionsRDD[2016] at repartition at PredictorEngineApp.java:152), which has no missing parents 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1438_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1438_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1439 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1473_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1437_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1477_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1437_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1442 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1440_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1440_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1441 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1439_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1439_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1442_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1478 stored as values in memory (estimated size 5.1 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1442_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1443 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1441_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1478_piece0 stored as bytes in memory (estimated size 2.8 KB, free 491.3 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1478_piece0 in memory on ***IP masked***:45737 (size: 2.8 KB, 
free: 491.5 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1478 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1441_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1446 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1478 (MapPartitionsRDD[2016] at repartition at PredictorEngineApp.java:152) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1478.0 with 1 tasks 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1444_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1444_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1476_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1478.0 (TID 1478, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1445 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1446_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1446_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1447 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1445_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1445_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1450 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1448_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1448_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1478_piece0 in memory on ***hostname masked***:55279 (size: 2.8 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1449 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1447_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1447_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1452 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1450_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1450_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO spark.ContextCleaner: Cleaned accumulator 1451 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1449_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1449_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 
17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1451_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1451_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1468.0 (TID 1468) in 80 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1468.0, whose tasks have all completed, from pool 18/04/17 17:28:00 INFO scheduler.DAGScheduler: ResultStage 1468 (foreachPartition at PredictorEngineApp.java:153) finished in 0.080 s 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Job 1468 finished: foreachPartition at PredictorEngineApp.java:153, took 0.162304 s 18/04/17 17:28:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x9294493 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x92944930x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49225, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1427_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Removed broadcast_1427_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291c1, negotiated timeout = 60000 18/04/17 17:28:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291c1 18/04/17 17:28:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291c1 closed 18/04/17 17:28:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.9 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1457.0 (TID 1457) in 231 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: ResultStage 1457 (foreachPartition at PredictorEngineApp.java:153) finished in 0.232 s 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1457.0, whose tasks have all completed, from pool 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Job 1457 finished: foreachPartition at PredictorEngineApp.java:153, took 0.258877 s 18/04/17 17:28:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7eb13f43 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 
watcher=hconnection-0x7eb13f430x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38255, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a989b, negotiated timeout = 60000 18/04/17 17:28:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a989b 18/04/17 17:28:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a989b closed 18/04/17 17:28:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.35 from job set of time 1523975280000 ms 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1478.0 (TID 1478) in 171 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1478.0, whose tasks have all completed, from pool 18/04/17 17:28:00 INFO scheduler.DAGScheduler: ShuffleMapStage 1478 (repartition at PredictorEngineApp.java:152) finished in 0.171 s 18/04/17 17:28:00 INFO scheduler.DAGScheduler: looking for newly runnable stages 18/04/17 17:28:00 INFO scheduler.DAGScheduler: running: Set(ResultStage 1453, ResultStage 1475, ResultStage 1454, ResultStage 1476, ResultStage 1477, ResultStage 1469, ResultStage 1470, ResultStage 1471, ResultStage 1472, ResultStage 1464, ResultStage 1443, ResultStage 1465, ResultStage 1466, ResultStage 1467, ResultStage 1459, ResultStage 1155, ResultStage 1460, ResultStage 1461, ResultStage 1462, ResultStage 1463, ResultStage 1455, ResultStage 1456, ResultStage 1458, ResultStage 1473, ResultStage 1452, ResultStage 1474) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: waiting: Set(ResultStage 1479) 18/04/17 17:28:00 INFO scheduler.DAGScheduler: failed: Set() 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1479 (MapPartitionsRDD[2019] at repartition at PredictorEngineApp.java:152), which has no missing parents 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1479 stored as values in memory (estimated size 6.3 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.MemoryStore: Block broadcast_1479_piece0 stored as bytes in memory (estimated size 3.6 KB, free 491.4 MB) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1479_piece0 in memory on ***IP masked***:45737 (size: 3.6 KB, free: 491.6 MB) 18/04/17 17:28:00 INFO spark.SparkContext: Created broadcast 1479 from broadcast at DAGScheduler.scala:1006 18/04/17 17:28:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1479 (MapPartitionsRDD[2019] at repartition at PredictorEngineApp.java:152) 18/04/17 17:28:00 INFO cluster.YarnClusterScheduler: Adding task set 1479.0 with 1 tasks 18/04/17 17:28:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1479.0 (TID 1479, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2163 bytes) 18/04/17 17:28:00 INFO storage.BlockManagerInfo: Added broadcast_1479_piece0 in memory on ***hostname masked***:35790 (size: 3.6 KB, free: 3.1 GB) 18/04/17 17:28:00 INFO 
spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to ***hostname masked***:37239 18/04/17 17:28:00 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 151 bytes 18/04/17 17:28:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1470.0 (TID 1470) in 2841 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:28:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1470.0, whose tasks have all completed, from pool 18/04/17 17:28:02 INFO scheduler.DAGScheduler: ResultStage 1470 (foreachPartition at PredictorEngineApp.java:153) finished in 2.842 s 18/04/17 17:28:02 INFO scheduler.DAGScheduler: Job 1470 finished: foreachPartition at PredictorEngineApp.java:153, took 2.928900 s 18/04/17 17:28:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7416cc64 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7416cc640x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49242, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291c7, negotiated timeout = 60000 18/04/17 17:28:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291c7 18/04/17 17:28:03 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291c7 closed 18/04/17 17:28:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:03 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.8 from job set of time 1523975280000 ms 18/04/17 17:28:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1474.0 (TID 1474) in 4058 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:28:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1474.0, whose tasks have all completed, from pool 18/04/17 17:28:04 INFO scheduler.DAGScheduler: ResultStage 1474 (foreachPartition at PredictorEngineApp.java:153) finished in 4.058 s 18/04/17 17:28:04 INFO scheduler.DAGScheduler: Job 1474 finished: foreachPartition at PredictorEngineApp.java:153, took 4.156243 s 18/04/17 17:28:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6f472506 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6f4725060x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49255, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291c8, negotiated timeout = 60000 18/04/17 17:28:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291c8 18/04/17 17:28:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291c8 closed 18/04/17 17:28:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:04 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.25 from job set of time 1523975280000 ms 18/04/17 17:28:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1477.0 (TID 1477) in 4618 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:28:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1477.0, whose tasks have all completed, from pool 18/04/17 17:28:04 INFO scheduler.DAGScheduler: ResultStage 1477 (foreachPartition at PredictorEngineApp.java:153) finished in 4.618 s 18/04/17 17:28:04 INFO scheduler.DAGScheduler: Job 1477 finished: foreachPartition at PredictorEngineApp.java:153, took 4.722781 s 18/04/17 17:28:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x431caa0a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x431caa0a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49258, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291c9, negotiated timeout = 60000 18/04/17 17:28:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291c9 18/04/17 17:28:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291c9 closed 18/04/17 17:28:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:04 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.7 from job set of time 1523975280000 ms 18/04/17 17:28:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1473.0 (TID 1473) in 4852 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:28:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1473.0, whose tasks have all completed, from pool 18/04/17 17:28:05 INFO scheduler.DAGScheduler: ResultStage 1473 (foreachPartition at PredictorEngineApp.java:153) finished in 4.852 s 18/04/17 17:28:05 INFO scheduler.DAGScheduler: Job 1473 finished: foreachPartition at PredictorEngineApp.java:153, took 4.947286 s 18/04/17 17:28:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5580f2ef connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5580f2ef0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38284, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98a3, negotiated timeout = 60000 18/04/17 17:28:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98a3 18/04/17 17:28:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98a3 closed 18/04/17 17:28:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:05 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.12 from job set of time 1523975280000 ms 18/04/17 17:28:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1469.0 (TID 1469) in 5428 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:28:05 INFO scheduler.DAGScheduler: ResultStage 1469 (foreachPartition at PredictorEngineApp.java:153) finished in 5.430 s 18/04/17 17:28:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1469.0, whose tasks have all completed, from pool 18/04/17 17:28:05 INFO scheduler.DAGScheduler: Job 1469 finished: foreachPartition at PredictorEngineApp.java:153, took 5.513826 s 18/04/17 17:28:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xac81297 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xac812970x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38288, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98a4, negotiated timeout = 60000 18/04/17 17:28:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98a4 18/04/17 17:28:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98a4 closed 18/04/17 17:28:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:05 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.2 from job set of time 1523975280000 ms 18/04/17 17:28:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1462.0 (TID 1462) in 6910 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:28:07 INFO scheduler.DAGScheduler: ResultStage 1462 (foreachPartition at PredictorEngineApp.java:153) finished in 6.911 s 18/04/17 17:28:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1462.0, whose tasks have all completed, from pool 18/04/17 17:28:07 INFO scheduler.DAGScheduler: Job 1462 finished: foreachPartition at PredictorEngineApp.java:153, took 6.972800 s 18/04/17 17:28:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7793d63e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7793d63e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38297, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98a6, negotiated timeout = 60000 18/04/17 17:28:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98a6 18/04/17 17:28:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98a6 closed 18/04/17 17:28:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:07 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.33 from job set of time 1523975280000 ms 18/04/17 17:28:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1471.0 (TID 1471) in 8561 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:28:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1471.0, whose tasks have all completed, from pool 18/04/17 17:28:08 INFO scheduler.DAGScheduler: ResultStage 1471 (foreachPartition at PredictorEngineApp.java:153) finished in 8.562 s 18/04/17 17:28:08 INFO scheduler.DAGScheduler: Job 1471 finished: foreachPartition at PredictorEngineApp.java:153, took 8.651476 s 18/04/17 17:28:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7db19c3e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7db19c3e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44685, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c98ff, negotiated timeout = 60000 18/04/17 17:28:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c98ff 18/04/17 17:28:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c98ff closed 18/04/17 17:28:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:08 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.31 from job set of time 1523975280000 ms 18/04/17 17:28:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1459.0 (TID 1459) in 9830 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:28:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1459.0, whose tasks have all completed, from pool 18/04/17 17:28:09 INFO scheduler.DAGScheduler: ResultStage 1459 (foreachPartition at PredictorEngineApp.java:153) finished in 9.830 s 18/04/17 17:28:09 INFO scheduler.DAGScheduler: Job 1459 finished: foreachPartition at PredictorEngineApp.java:153, took 9.863858 s 18/04/17 17:28:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x751b3f9a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x751b3f9a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49290, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291cb, negotiated timeout = 60000 18/04/17 17:28:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291cb 18/04/17 17:28:09 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291cb closed 18/04/17 17:28:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:09 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.27 from job set of time 1523975280000 ms 18/04/17 17:28:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1466.0 (TID 1466) in 10421 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:28:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1466.0, whose tasks have all completed, from pool 18/04/17 17:28:10 INFO scheduler.DAGScheduler: ResultStage 1466 (foreachPartition at PredictorEngineApp.java:153) finished in 10.422 s 18/04/17 17:28:10 INFO scheduler.DAGScheduler: Job 1466 finished: foreachPartition at PredictorEngineApp.java:153, took 10.498947 s 18/04/17 17:28:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x664d2c5d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x664d2c5d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49294, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291cd, negotiated timeout = 60000 18/04/17 17:28:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291cd 18/04/17 17:28:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291cd closed 18/04/17 17:28:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.29 from job set of time 1523975280000 ms 18/04/17 17:28:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1155.0 (TID 1155) in 731138 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:28:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1155.0, whose tasks have all completed, from pool 18/04/17 17:28:11 INFO scheduler.DAGScheduler: ResultStage 1155 (foreachPartition at PredictorEngineApp.java:153) finished in 731.138 s 18/04/17 17:28:11 INFO scheduler.DAGScheduler: Job 1155 finished: foreachPartition at PredictorEngineApp.java:153, took 731.214913 s 18/04/17 17:28:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1bce58fe connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1bce58fe0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38321, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98aa, negotiated timeout = 60000 18/04/17 17:28:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98aa 18/04/17 17:28:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98aa closed 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:11 INFO scheduler.JobScheduler: Finished job streaming job 1523974560000 ms.27 from job set of time 1523974560000 ms 18/04/17 17:28:11 INFO scheduler.JobScheduler: Total delay: 731.302 s for time 1523974560000 ms (execution: 731.250 s) 18/04/17 17:28:11 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:28:11 INFO scheduler.InputInfoTracker: remove old batch metadata: 18/04/17 17:28:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1467.0 (TID 1467) in 11193 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:28:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1467.0, whose tasks have all completed, from pool 18/04/17 17:28:11 INFO scheduler.DAGScheduler: ResultStage 1467 (foreachPartition at PredictorEngineApp.java:153) finished in 11.194 s 18/04/17 17:28:11 INFO scheduler.DAGScheduler: Job 1467 finished: foreachPartition at PredictorEngineApp.java:153, took 11.273752 s 18/04/17 17:28:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6a2d69f5 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6a2d69f50x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44706, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9902, negotiated timeout = 60000 18/04/17 17:28:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9902 18/04/17 17:28:11 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9902 closed 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1455.0 (TID 1455) in 11285 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:28:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1455.0, whose tasks have all completed, from pool 18/04/17 17:28:11 INFO scheduler.DAGScheduler: ResultStage 1455 (foreachPartition at PredictorEngineApp.java:153) finished in 11.285 s 18/04/17 17:28:11 INFO scheduler.DAGScheduler: Job 1455 finished: foreachPartition at PredictorEngineApp.java:153, took 11.302474 s 18/04/17 17:28:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x317b8522 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x317b85220x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38328, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:11 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.34 from job set of time 1523975280000 ms 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98ab, negotiated timeout = 60000 18/04/17 17:28:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98ab 18/04/17 17:28:11 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98ab closed 18/04/17 17:28:11 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:11 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.32 from job set of time 1523975280000 ms 18/04/17 17:28:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1453.0 (TID 1453) in 12021 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:28:12 INFO scheduler.DAGScheduler: ResultStage 1453 (foreachPartition at PredictorEngineApp.java:153) finished in 12.021 s 18/04/17 17:28:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1453.0, whose tasks have all completed, from pool 18/04/17 17:28:12 INFO scheduler.DAGScheduler: Job 1452 finished: foreachPartition at PredictorEngineApp.java:153, took 12.031458 s 18/04/17 17:28:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7f9ecd2f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7f9ecd2f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49308, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291ce, negotiated timeout = 60000 18/04/17 17:28:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291ce 18/04/17 17:28:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291ce closed 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.24 from job set of time 1523975280000 ms 18/04/17 17:28:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1456.0 (TID 1456) in 12052 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:28:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1456.0, whose tasks have all completed, from pool 18/04/17 17:28:12 INFO scheduler.DAGScheduler: ResultStage 1456 (foreachPartition at PredictorEngineApp.java:153) finished in 12.053 s 18/04/17 17:28:12 INFO scheduler.DAGScheduler: Job 1456 finished: foreachPartition at PredictorEngineApp.java:153, took 12.075986 s 18/04/17 17:28:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6c669d62 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6c669d620x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38334, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98ad, negotiated timeout = 60000 18/04/17 17:28:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98ad 18/04/17 17:28:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98ad closed 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.15 from job set of time 1523975280000 ms 18/04/17 17:28:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1476.0 (TID 1476) in 12093 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:28:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1476.0, whose tasks have all completed, from pool 18/04/17 17:28:12 INFO scheduler.DAGScheduler: ResultStage 1476 (foreachPartition at PredictorEngineApp.java:153) finished in 12.093 s 18/04/17 17:28:12 INFO scheduler.DAGScheduler: Job 1476 finished: foreachPartition at PredictorEngineApp.java:153, took 12.195459 s 18/04/17 17:28:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b02f01d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7b02f01d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38338, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98ae, negotiated timeout = 60000 18/04/17 17:28:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98ae 18/04/17 17:28:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98ae closed 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.23 from job set of time 1523975280000 ms 18/04/17 17:28:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1461.0 (TID 1461) in 12307 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:28:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1461.0, whose tasks have all completed, from pool 18/04/17 17:28:12 INFO scheduler.DAGScheduler: ResultStage 1461 (foreachPartition at PredictorEngineApp.java:153) finished in 12.308 s 18/04/17 17:28:12 INFO scheduler.DAGScheduler: Job 1461 finished: foreachPartition at PredictorEngineApp.java:153, took 12.366175 s 18/04/17 17:28:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4d24176b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4d24176b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49320, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291d0, negotiated timeout = 60000 18/04/17 17:28:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291d0 18/04/17 17:28:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291d0 closed 18/04/17 17:28:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.19 from job set of time 1523975280000 ms 18/04/17 17:28:13 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1465.0 (TID 1465) in 13552 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:28:13 INFO cluster.YarnClusterScheduler: Removed TaskSet 1465.0, whose tasks have all completed, from pool 18/04/17 17:28:13 INFO scheduler.DAGScheduler: ResultStage 1465 (foreachPartition at PredictorEngineApp.java:153) finished in 13.553 s 18/04/17 17:28:13 INFO scheduler.DAGScheduler: Job 1464 finished: foreachPartition at PredictorEngineApp.java:153, took 13.626852 s 18/04/17 17:28:13 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1af8ec0b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1af8ec0b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:13 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:13 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49330, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:13 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291d1, negotiated timeout = 60000 18/04/17 17:28:13 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291d1 18/04/17 17:28:13 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291d1 closed 18/04/17 17:28:13 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:13 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.5 from job set of time 1523975280000 ms 18/04/17 17:28:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1454.0 (TID 1454) in 16086 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:28:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 1454.0, whose tasks have all completed, from pool 18/04/17 17:28:16 INFO scheduler.DAGScheduler: ResultStage 1454 (foreachPartition at PredictorEngineApp.java:153) finished in 16.087 s 18/04/17 17:28:16 INFO scheduler.DAGScheduler: Job 1454 finished: foreachPartition at PredictorEngineApp.java:153, took 16.100591 s 18/04/17 17:28:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x525404a8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x525404a80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38362, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98b1, negotiated timeout = 60000 18/04/17 17:28:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98b1 18/04/17 17:28:16 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98b1 closed 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:16 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.18 from job set of time 1523975280000 ms 18/04/17 17:28:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1460.0 (TID 1460) in 16416 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:28:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 1460.0, whose tasks have all completed, from pool 18/04/17 17:28:16 INFO scheduler.DAGScheduler: ResultStage 1460 (foreachPartition at PredictorEngineApp.java:153) finished in 16.416 s 18/04/17 17:28:16 INFO scheduler.DAGScheduler: Job 1460 finished: foreachPartition at PredictorEngineApp.java:153, took 16.456867 s 18/04/17 17:28:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6feedf02 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x6feedf020x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44748, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9906, negotiated timeout = 60000 18/04/17 17:28:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9906 18/04/17 17:28:16 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9906 closed 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:16 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.6 from job set of time 1523975280000 ms 18/04/17 17:28:16 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1452.0 (TID 1452) in 16741 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:28:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 1452.0, whose tasks have all completed, from pool 18/04/17 17:28:16 INFO scheduler.DAGScheduler: ResultStage 1452 (foreachPartition at PredictorEngineApp.java:153) finished in 16.742 s 18/04/17 17:28:16 INFO scheduler.DAGScheduler: Job 1453 finished: foreachPartition at PredictorEngineApp.java:153, took 16.747434 s 18/04/17 17:28:16 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2d111f8b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:16 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2d111f8b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38369, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98b2, negotiated timeout = 60000 18/04/17 17:28:16 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98b2 18/04/17 17:28:16 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98b2 closed 18/04/17 17:28:16 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:16 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.20 from job set of time 1523975280000 ms 18/04/17 17:28:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1475.0 (TID 1475) in 17837 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:28:17 INFO scheduler.DAGScheduler: ResultStage 1475 (foreachPartition at PredictorEngineApp.java:153) finished in 17.838 s 18/04/17 17:28:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1475.0, whose tasks have all completed, from pool 18/04/17 17:28:17 INFO scheduler.DAGScheduler: Job 1475 finished: foreachPartition at PredictorEngineApp.java:153, took 17.937630 s 18/04/17 17:28:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7262d717 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7262d7170x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44756, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9908, negotiated timeout = 60000 18/04/17 17:28:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9908 18/04/17 17:28:18 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9908 closed 18/04/17 17:28:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:18 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.22 from job set of time 1523975280000 ms 18/04/17 17:28:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1479.0 (TID 1479) in 17957 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:28:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1479.0, whose tasks have all completed, from pool 18/04/17 17:28:18 INFO scheduler.DAGScheduler: ResultStage 1479 (foreachPartition at PredictorEngineApp.java:153) finished in 17.958 s 18/04/17 17:28:18 INFO scheduler.DAGScheduler: Job 1478 finished: foreachPartition at PredictorEngineApp.java:153, took 18.229904 s 18/04/17 17:28:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x606347c8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x606347c80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44762, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9909, negotiated timeout = 60000 18/04/17 17:28:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9909 18/04/17 17:28:18 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9909 closed 18/04/17 17:28:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:18 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.26 from job set of time 1523975280000 ms 18/04/17 17:28:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1463.0 (TID 1463) in 21241 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:28:21 INFO scheduler.DAGScheduler: ResultStage 1463 (foreachPartition at PredictorEngineApp.java:153) finished in 21.242 s 18/04/17 17:28:21 INFO cluster.YarnClusterScheduler: Removed TaskSet 1463.0, whose tasks have all completed, from pool 18/04/17 17:28:21 INFO scheduler.DAGScheduler: Job 1463 finished: foreachPartition at PredictorEngineApp.java:153, took 21.307829 s 18/04/17 17:28:21 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x12b73bc3 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x12b73bc30x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:21 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:21 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49370, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:21 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291d8, negotiated timeout = 60000 18/04/17 17:28:21 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291d8 18/04/17 17:28:21 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291d8 closed 18/04/17 17:28:21 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:21 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.28 from job set of time 1523975280000 ms 18/04/17 17:28:47 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1464.0 (TID 1464) in 47311 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:28:47 INFO scheduler.DAGScheduler: ResultStage 1464 (foreachPartition at PredictorEngineApp.java:153) finished in 47.312 s 18/04/17 17:28:47 INFO cluster.YarnClusterScheduler: Removed TaskSet 1464.0, whose tasks have all completed, from pool 18/04/17 17:28:47 INFO scheduler.DAGScheduler: Job 1465 finished: foreachPartition at PredictorEngineApp.java:153, took 47.381996 s 18/04/17 17:28:47 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x43a6b4ea connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:47 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x43a6b4ea0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:47 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:47 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44828, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:47 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9913, negotiated timeout = 60000 18/04/17 17:28:47 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9913 18/04/17 17:28:47 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9913 closed 18/04/17 17:28:47 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:47 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.1 from job set of time 1523975280000 ms 18/04/17 17:28:47 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1458.0 (TID 1458) in 47596 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:28:47 INFO cluster.YarnClusterScheduler: Removed TaskSet 1458.0, whose tasks have all completed, from pool 18/04/17 17:28:47 INFO scheduler.DAGScheduler: ResultStage 1458 (foreachPartition at PredictorEngineApp.java:153) finished in 47.596 s 18/04/17 17:28:47 INFO scheduler.DAGScheduler: Job 1458 finished: foreachPartition at PredictorEngineApp.java:153, took 47.626201 s 18/04/17 17:28:47 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x27a64e57 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:47 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x27a64e570x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:47 INFO spark.ContextCleaner: Cleaned accumulator 1454 18/04/17 17:28:47 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:47 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44831, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:47 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9914, negotiated timeout = 60000 18/04/17 17:28:47 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9914 18/04/17 17:28:47 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9914 closed 18/04/17 17:28:47 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:47 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.11 from job set of time 1523975280000 ms 18/04/17 17:28:49 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1472.0 (TID 1472) in 48872 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:28:49 INFO cluster.YarnClusterScheduler: Removed TaskSet 1472.0, whose tasks have all completed, from pool 18/04/17 17:28:49 INFO scheduler.DAGScheduler: ResultStage 1472 (foreachPartition at PredictorEngineApp.java:153) finished in 48.873 s 18/04/17 17:28:49 INFO scheduler.DAGScheduler: Job 1472 finished: foreachPartition at PredictorEngineApp.java:153, took 48.965538 s 18/04/17 17:28:49 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x41ee0106 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:28:49 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x41ee01060x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:28:49 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:28:49 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38455, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:28:49 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98bb, negotiated timeout = 60000 18/04/17 17:28:49 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98bb 18/04/17 17:28:49 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98bb closed 18/04/17 17:28:49 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:28:49 INFO scheduler.JobScheduler: Finished job streaming job 1523975280000 ms.10 from job set of time 1523975280000 ms 18/04/17 17:28:49 INFO scheduler.JobScheduler: Total delay: 49.053 s for time 1523975280000 ms (execution: 49.007 s) 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1908 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1908 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1944 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1944 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1908 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1908 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1944 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1944 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1909 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1909 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1945 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1945 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1909 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1909 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1945 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1945 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1910 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1910 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1946 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1946 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1910 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1910 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1946 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1946 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1911 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1911 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1947 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1947 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1911 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1911 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1947 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1947 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1912 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1912 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1948 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1948 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1912 
from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1912 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1948 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1948 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1913 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1913 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1949 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1949 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1913 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1913 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1949 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1949 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1914 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1914 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1950 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1950 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1914 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1914 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1950 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1950 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1915 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1915 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1951 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1951 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1915 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1915 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1951 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1951 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1916 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1916 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1952 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1952 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1916 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1916 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1952 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1952 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1917 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1917 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1953 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1953 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1917 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1917 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1953 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1953 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1918 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1918 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1954 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1954 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1918 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1918 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1954 from 
persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1954 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1919 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1919 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1955 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1955 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1919 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1919 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1955 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1955 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1920 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1920 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1956 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1956 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1920 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1920 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1956 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1956 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1921 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1921 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1957 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1957 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1921 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1921 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1957 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1957 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1922 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1922 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1958 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1958 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1922 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1922 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1958 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1958 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1923 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1923 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1959 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1959 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1923 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1923 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1959 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1959 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1924 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1924 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1960 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1960 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1924 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1924 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1960 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1960 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1925 from 
persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1925 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1961 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1961 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1925 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1925 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1961 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1961 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1926 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1926 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1962 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1962 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1926 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1926 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1962 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1962 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1927 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1927 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1963 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1963 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1927 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1927 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1963 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1963 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1928 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1928 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1964 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1964 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1928 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1928 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1964 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1964 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1929 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1929 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1965 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1965 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1929 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1929 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1965 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1965 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1930 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1930 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1966 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1966 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1930 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1930 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1966 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1966 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1931 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1931 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1967 from 
persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1967 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1931 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1931 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1967 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1967 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1932 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1932 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1968 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1968 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1932 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1932 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1968 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1968 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1933 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1933 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1969 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1969 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1933 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1933 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1969 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1969 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1934 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1934 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1970 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1970 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1934 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1934 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1970 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1970 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1935 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1935 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1971 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1971 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1935 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1935 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1971 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1971 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1936 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1936 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1972 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1972 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1936 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1936 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1972 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1972 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1937 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1937 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1973 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1973 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1937 from 
persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1937 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1973 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1973 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1938 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1938 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1974 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1974 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1938 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1938 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1974 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1974 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1939 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1939 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1975 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1975 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1939 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1939 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1975 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1975 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1940 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1940 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1976 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1976 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1940 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1940 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1976 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1976 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1941 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1941 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1977 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1977 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1941 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1941 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1977 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1977 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1942 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1942 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1978 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1978 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1942 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1942 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1978 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1978 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1943 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1943 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1979 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1979 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1943 from persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1943 18/04/17 17:28:49 INFO kafka.KafkaRDD: Removing RDD 1979 from 
persistence list 18/04/17 17:28:49 INFO storage.BlockManager: Removing RDD 1979 18/04/17 17:28:49 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:28:49 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523975160000 ms 1523975100000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Added jobs for time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.0 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.1 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.2 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.3 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.4 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.0 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.5 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.3 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.7 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.6 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.9 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.4 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.8 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.10 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.11 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.12 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.13 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.14 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.15 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.13 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.18 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.16 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.17 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.14 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO 
scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.19 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.20 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.17 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.21 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.21 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.16 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.22 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.23 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.24 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.25 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.26 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.27 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.28 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.29 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.30 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.31 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.32 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.33 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.34 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.30 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975340000 ms.35 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1479 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO 
scheduler.DAGScheduler: Final stage: ResultStage 1480 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1480 (KafkaRDD[2045] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1480 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1480_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1480_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1480 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1480 (KafkaRDD[2045] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1480.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1480 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 
17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1481 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1481 (KafkaRDD[2044] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1480.0 (TID 1480, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1481 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1481_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1481_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1481 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1481 (KafkaRDD[2044] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1481.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1481 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1482 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1482 (KafkaRDD[2046] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1481.0 (TID 1481, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1482 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1482_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1482_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1482 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1482 (KafkaRDD[2046] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1482.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1482 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1483 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1483 (KafkaRDD[2025] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting 
task 0.0 in stage 1482.0 (TID 1482, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1483 stored as values in memory (estimated size 5.7 KB, free 491.4 MB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1483_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.4 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1483_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1483 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1483 (KafkaRDD[2025] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1483.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1483 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1484 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1484 (KafkaRDD[2029] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1483.0 (TID 1483, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1484 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1484_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1484_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1484 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1484 (KafkaRDD[2029] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1484.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1484 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1485 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1485 (KafkaRDD[2047] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1484.0 (TID 1484, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1485 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1480_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1485_piece0 stored as bytes in memory (estimated size 
3.1 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1485_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.6 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1485 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1485 (KafkaRDD[2047] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1485.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1485 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1486 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1486 (KafkaRDD[2051] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1485.0 (TID 1485, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1486 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1481_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1486_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1486_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1486 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1486 (KafkaRDD[2051] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1486.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1486 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1487 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1487 (KafkaRDD[2038] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1486.0 (TID 1486, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1487 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1484_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1487_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1487_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: 
Created broadcast 1487 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1487 (KafkaRDD[2038] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1487.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1487 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1488 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1488 (KafkaRDD[2022] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1487.0 (TID 1487, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1488 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1488_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1488_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1488 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1488 (KafkaRDD[2022] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1488.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1488 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1489 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1489 (KafkaRDD[2030] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1488.0 (TID 1488, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1489 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1487_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1489_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1489_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1489 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1489 (KafkaRDD[2030] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1489.0 with 1 tasks 18/04/17 17:29:00 INFO 
scheduler.DAGScheduler: Got job 1489 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1490 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1490 (KafkaRDD[2027] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1489.0 (TID 1489, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1490 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1490_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1490_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1490 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1490 (KafkaRDD[2027] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1490.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1490 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1491 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1491 (KafkaRDD[2039] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1485_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1491 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1490.0 (TID 1490, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1491_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1491_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1491 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1491 (KafkaRDD[2039] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1491.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1491 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1492 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO 
scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1492 (KafkaRDD[2043] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1492 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1491.0 (TID 1491, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1492_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1492_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1490_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1492 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1492 (KafkaRDD[2043] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1492.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1492 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1493 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1493 (KafkaRDD[2054] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1493 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1492.0 (TID 1492, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1489_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1493_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1493_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1493 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1493 (KafkaRDD[2054] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1493.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1493 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1494 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1494 (KafkaRDD[2049] at createDirectStream at PredictorEngineApp.java:125), which has no 
missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1494 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1493.0 (TID 1493, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1492_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1491_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1494_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.3 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1494_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1494 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1494 (KafkaRDD[2049] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1494.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1494 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1495 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1495 (KafkaRDD[2032] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1495 stored as values in memory (estimated size 5.7 KB, free 491.3 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1494.0 (TID 1494, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1495_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1495_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1495 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1495 (KafkaRDD[2032] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1495.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1495 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1496 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1496 (KafkaRDD[2035] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1496 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting 
task 0.0 in stage 1495.0 (TID 1495, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1496_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1487.0 (TID 1487) in 41 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1487.0, whose tasks have all completed, from pool 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1496_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1496 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1496 (KafkaRDD[2035] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1496.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1496 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1497 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1497 (KafkaRDD[2021] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1497 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1496.0 (TID 1496, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1497_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1497_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1497 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1497 (KafkaRDD[2021] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1497.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1498 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1498 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1498 (KafkaRDD[2053] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1483_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1498 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1497.0 (TID 1497, ***hostname masked***, executor 1, partition 0, 
RACK_LOCAL, 2063 bytes) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1495_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1498_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1498_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1498 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1498 (KafkaRDD[2053] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1498.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1497 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1499 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1499 (KafkaRDD[2028] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1499 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1498.0 (TID 1498, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1499_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1499_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1499 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1499 (KafkaRDD[2028] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1499.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1499 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1500 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1500 (KafkaRDD[2055] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1500 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1499.0 (TID 1499, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1500_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1500_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created 
broadcast 1500 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1500 (KafkaRDD[2055] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1500.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1500 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1501 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1501 (KafkaRDD[2052] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1501 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1500.0 (TID 1500, ***hostname masked***, executor 9, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1496_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1494_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1501_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1501_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1501 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1501 (KafkaRDD[2052] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1501.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1501 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1502 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1502 (KafkaRDD[2042] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1502 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1501.0 (TID 1501, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1502_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1502_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1502 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1502 (KafkaRDD[2042] at createDirectStream at 
PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1502.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1502 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1503 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1503 (KafkaRDD[2048] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1503 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1502.0 (TID 1502, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1488_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1503_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1503_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1503 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1503 (KafkaRDD[2048] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1503.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1503 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1504 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1504 (KafkaRDD[2040] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1504 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1503.0 (TID 1503, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1482_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1497_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1493_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1504_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1504_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1504 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: 
Submitting 1 missing tasks from ResultStage 1504 (KafkaRDD[2040] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1504.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1504 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1505 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1505 (KafkaRDD[2031] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1505 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1504.0 (TID 1504, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1505_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1505_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1505 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1505 (KafkaRDD[2031] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1505.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Got job 1505 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1506 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1506 (KafkaRDD[2026] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1506 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1505.0 (TID 1505, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1498_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.MemoryStore: Block broadcast_1506_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1506_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:29:00 INFO spark.SparkContext: Created broadcast 1506 from broadcast at DAGScheduler.scala:1006 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1506 (KafkaRDD[2026] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Adding task set 1506.0 with 1 tasks 18/04/17 17:29:00 INFO scheduler.DAGScheduler: ResultStage 1487 (foreachPartition at PredictorEngineApp.java:153) finished in 0.066 s 18/04/17 
17:29:00 INFO scheduler.DAGScheduler: Job 1486 finished: foreachPartition at PredictorEngineApp.java:153, took 0.101401 s 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1506.0 (TID 1506, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:29:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x721e4b8b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x721e4b8b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44931, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1504_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1481.0 (TID 1481) in 94 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:29:00 INFO scheduler.DAGScheduler: ResultStage 1481 (foreachPartition at PredictorEngineApp.java:153) finished in 0.094 s 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1481.0, whose tasks have all completed, from pool 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Job 1480 finished: foreachPartition at PredictorEngineApp.java:153, took 0.107405 s 18/04/17 17:29:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xc93d3e7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xc93d3e70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38550, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1486_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1500_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1499_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1503_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1506_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1505_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9917, negotiated timeout = 60000 18/04/17 17:29:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9917 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1494.0 (TID 1494) in 68 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1494.0, whose tasks have all completed, from pool 18/04/17 17:29:00 INFO scheduler.DAGScheduler: ResultStage 1494 (foreachPartition at PredictorEngineApp.java:153) finished in 0.068 s 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Job 1493 finished: foreachPartition at PredictorEngineApp.java:153, took 0.128633 s 18/04/17 17:29:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1578e97c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1578e97c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44935, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1502_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98c2, negotiated timeout = 60000 18/04/17 17:29:00 INFO storage.BlockManagerInfo: Added broadcast_1501_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9917 closed 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9918, negotiated timeout = 60000 18/04/17 17:29:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98c2 18/04/17 17:29:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9918 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98c2 closed 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.18 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9918 closed 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1486.0 (TID 1486) in 126 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1486.0, whose tasks have all completed, from pool 18/04/17 17:29:00 INFO scheduler.DAGScheduler: ResultStage 1486 (foreachPartition at PredictorEngineApp.java:153) finished in 0.127 s 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Job 1485 finished: foreachPartition at PredictorEngineApp.java:153, took 0.158426 s 18/04/17 17:29:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x9f0ccb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x9f0ccb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49535, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.24 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.29 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291e4, negotiated timeout = 60000 18/04/17 17:29:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291e4 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291e4 closed 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.31 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1502.0 (TID 1502) in 110 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1502.0, whose tasks have all completed, from pool 18/04/17 17:29:00 INFO scheduler.DAGScheduler: ResultStage 1502 (foreachPartition at PredictorEngineApp.java:153) finished in 0.110 s 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Job 1501 finished: foreachPartition at PredictorEngineApp.java:153, took 0.199843 s 18/04/17 17:29:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5a741cb1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5a741cb10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44943, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c991a, negotiated timeout = 60000 18/04/17 17:29:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c991a 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c991a closed 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.22 from job set of time 1523975340000 ms 18/04/17 17:29:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1500.0 (TID 1500) in 291 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:29:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1500.0, whose tasks have all completed, from pool 18/04/17 17:29:00 INFO scheduler.DAGScheduler: ResultStage 1500 (foreachPartition at PredictorEngineApp.java:153) finished in 0.292 s 18/04/17 17:29:00 INFO scheduler.DAGScheduler: Job 1499 finished: foreachPartition at PredictorEngineApp.java:153, took 0.376620 s 18/04/17 17:29:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x220280a7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x220280a70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49541, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291e8, negotiated timeout = 60000 18/04/17 17:29:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291e8 18/04/17 17:29:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291e8 closed 18/04/17 17:29:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.35 from job set of time 1523975340000 ms 18/04/17 17:29:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1480.0 (TID 1480) in 1687 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:29:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1480.0, whose tasks have all completed, from pool 18/04/17 17:29:01 INFO scheduler.DAGScheduler: ResultStage 1480 (foreachPartition at PredictorEngineApp.java:153) finished in 1.687 s 18/04/17 17:29:01 INFO scheduler.DAGScheduler: Job 1479 finished: foreachPartition at PredictorEngineApp.java:153, took 1.695992 s 18/04/17 17:29:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x48bf9418 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x48bf94180x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49545, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291e9, negotiated timeout = 60000 18/04/17 17:29:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291e9 18/04/17 17:29:01 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291e9 closed 18/04/17 17:29:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:01 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.25 from job set of time 1523975340000 ms 18/04/17 17:29:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1499.0 (TID 1499) in 2819 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:29:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1499.0, whose tasks have all completed, from pool 18/04/17 17:29:02 INFO scheduler.DAGScheduler: ResultStage 1499 (foreachPartition at PredictorEngineApp.java:153) finished in 2.820 s 18/04/17 17:29:02 INFO scheduler.DAGScheduler: Job 1497 finished: foreachPartition at PredictorEngineApp.java:153, took 2.903295 s 18/04/17 17:29:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63952915 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x639529150x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38576, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98ca, negotiated timeout = 60000 18/04/17 17:29:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98ca 18/04/17 17:29:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98ca closed 18/04/17 17:29:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:02 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.8 from job set of time 1523975340000 ms 18/04/17 17:29:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1490.0 (TID 1490) in 3994 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:29:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1490.0, whose tasks have all completed, from pool 18/04/17 17:29:04 INFO scheduler.DAGScheduler: ResultStage 1490 (foreachPartition at PredictorEngineApp.java:153) finished in 3.994 s 18/04/17 17:29:04 INFO scheduler.DAGScheduler: Job 1489 finished: foreachPartition at PredictorEngineApp.java:153, took 4.040425 s 18/04/17 17:29:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1a069120 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1a0691200x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49559, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291ea, negotiated timeout = 60000 18/04/17 17:29:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291ea 18/04/17 17:29:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291ea closed 18/04/17 17:29:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:04 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.7 from job set of time 1523975340000 ms 18/04/17 17:29:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1506.0 (TID 1506) in 5433 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:29:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1506.0, whose tasks have all completed, from pool 18/04/17 17:29:05 INFO scheduler.DAGScheduler: ResultStage 1506 (foreachPartition at PredictorEngineApp.java:153) finished in 5.434 s 18/04/17 17:29:05 INFO scheduler.DAGScheduler: Job 1505 finished: foreachPartition at PredictorEngineApp.java:153, took 5.532784 s 18/04/17 17:29:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x57a767a9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x57a767a90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44968, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9921, negotiated timeout = 60000 18/04/17 17:29:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9921 18/04/17 17:29:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9921 closed 18/04/17 17:29:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:05 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.6 from job set of time 1523975340000 ms 18/04/17 17:29:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1503.0 (TID 1503) in 5482 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:29:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1503.0, whose tasks have all completed, from pool 18/04/17 17:29:05 INFO scheduler.DAGScheduler: ResultStage 1503 (foreachPartition at PredictorEngineApp.java:153) finished in 5.483 s 18/04/17 17:29:05 INFO scheduler.DAGScheduler: Job 1502 finished: foreachPartition at PredictorEngineApp.java:153, took 5.575898 s 18/04/17 17:29:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x18f34a1f connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x18f34a1f0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38589, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98cd, negotiated timeout = 60000 18/04/17 17:29:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98cd 18/04/17 17:29:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98cd closed 18/04/17 17:29:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:05 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.28 from job set of time 1523975340000 ms 18/04/17 17:29:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1491.0 (TID 1491) in 6020 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:29:06 INFO cluster.YarnClusterScheduler: Removed TaskSet 1491.0, whose tasks have all completed, from pool 18/04/17 17:29:06 INFO scheduler.DAGScheduler: ResultStage 1491 (foreachPartition at PredictorEngineApp.java:153) finished in 6.020 s 18/04/17 17:29:06 INFO scheduler.DAGScheduler: Job 1490 finished: foreachPartition at PredictorEngineApp.java:153, took 6.069360 s 18/04/17 17:29:06 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b092286 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:06 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1b0922860x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:06 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:06 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49570, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:06 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291ee, negotiated timeout = 60000 18/04/17 17:29:06 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291ee 18/04/17 17:29:06 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291ee closed 18/04/17 17:29:06 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:06 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.19 from job set of time 1523975340000 ms 18/04/17 17:29:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1492.0 (TID 1492) in 7937 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:29:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1492.0, whose tasks have all completed, from pool 18/04/17 17:29:08 INFO scheduler.DAGScheduler: ResultStage 1492 (foreachPartition at PredictorEngineApp.java:153) finished in 7.938 s 18/04/17 17:29:08 INFO scheduler.DAGScheduler: Job 1491 finished: foreachPartition at PredictorEngineApp.java:153, took 7.990389 s 18/04/17 17:29:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xa1fa264 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xa1fa2640x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44980, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9922, negotiated timeout = 60000 18/04/17 17:29:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9922 18/04/17 17:29:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9922 closed 18/04/17 17:29:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:08 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.23 from job set of time 1523975340000 ms 18/04/17 17:29:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1501.0 (TID 1501) in 9857 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:29:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1501.0, whose tasks have all completed, from pool 18/04/17 17:29:10 INFO scheduler.DAGScheduler: ResultStage 1501 (foreachPartition at PredictorEngineApp.java:153) finished in 9.858 s 18/04/17 17:29:10 INFO scheduler.DAGScheduler: Job 1500 finished: foreachPartition at PredictorEngineApp.java:153, took 9.945904 s 18/04/17 17:29:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7405b27a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7405b27a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44987, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9924, negotiated timeout = 60000 18/04/17 17:29:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9924 18/04/17 17:29:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9924 closed 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.32 from job set of time 1523975340000 ms 18/04/17 17:29:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1488.0 (TID 1488) in 10183 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:29:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1488.0, whose tasks have all completed, from pool 18/04/17 17:29:10 INFO scheduler.DAGScheduler: ResultStage 1488 (foreachPartition at PredictorEngineApp.java:153) finished in 10.183 s 18/04/17 17:29:10 INFO scheduler.DAGScheduler: Job 1487 finished: foreachPartition at PredictorEngineApp.java:153, took 10.222581 s 18/04/17 17:29:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x30ae994e connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x30ae994e0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38608, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98d1, negotiated timeout = 60000 18/04/17 17:29:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98d1 18/04/17 17:29:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98d1 closed 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.2 from job set of time 1523975340000 ms 18/04/17 17:29:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1493.0 (TID 1493) in 10648 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:29:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1493.0, whose tasks have all completed, from pool 18/04/17 17:29:10 INFO scheduler.DAGScheduler: ResultStage 1493 (foreachPartition at PredictorEngineApp.java:153) finished in 10.649 s 18/04/17 17:29:10 INFO scheduler.DAGScheduler: Job 1492 finished: foreachPartition at PredictorEngineApp.java:153, took 10.705387 s 18/04/17 17:29:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x34836375 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x348363750x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44993, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9925, negotiated timeout = 60000 18/04/17 17:29:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9925 18/04/17 17:29:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9925 closed 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.34 from job set of time 1523975340000 ms 18/04/17 17:29:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1498.0 (TID 1498) in 10749 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:29:10 INFO scheduler.DAGScheduler: ResultStage 1498 (foreachPartition at PredictorEngineApp.java:153) finished in 10.749 s 18/04/17 17:29:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1498.0, whose tasks have all completed, from pool 18/04/17 17:29:10 INFO scheduler.DAGScheduler: Job 1498 finished: foreachPartition at PredictorEngineApp.java:153, took 10.829869 s 18/04/17 17:29:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x370acfdb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x370acfdb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:44996, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9926, negotiated timeout = 60000 18/04/17 17:29:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9926 18/04/17 17:29:10 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9926 closed 18/04/17 17:29:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.33 from job set of time 1523975340000 ms 18/04/17 17:29:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1497.0 (TID 1497) in 12366 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:29:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1497.0, whose tasks have all completed, from pool 18/04/17 17:29:12 INFO scheduler.DAGScheduler: ResultStage 1497 (foreachPartition at PredictorEngineApp.java:153) finished in 12.366 s 18/04/17 17:29:12 INFO scheduler.DAGScheduler: Job 1496 finished: foreachPartition at PredictorEngineApp.java:153, took 12.444578 s 18/04/17 17:29:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x59e9e49a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x59e9e49a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38620, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98d2, negotiated timeout = 60000 18/04/17 17:29:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98d2 18/04/17 17:29:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98d2 closed 18/04/17 17:29:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.1 from job set of time 1523975340000 ms 18/04/17 17:29:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1484.0 (TID 1484) in 15329 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:29:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1484.0, whose tasks have all completed, from pool 18/04/17 17:29:15 INFO scheduler.DAGScheduler: ResultStage 1484 (foreachPartition at PredictorEngineApp.java:153) finished in 15.329 s 18/04/17 17:29:15 INFO scheduler.DAGScheduler: Job 1483 finished: foreachPartition at PredictorEngineApp.java:153, took 15.353619 s 18/04/17 17:29:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x12ce62f6 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x12ce62f60x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45009, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9927, negotiated timeout = 60000 18/04/17 17:29:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9927 18/04/17 17:29:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9927 closed 18/04/17 17:29:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:15 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.9 from job set of time 1523975340000 ms 18/04/17 17:29:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1504.0 (TID 1504) in 17009 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:29:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1504.0, whose tasks have all completed, from pool 18/04/17 17:29:17 INFO scheduler.DAGScheduler: ResultStage 1504 (foreachPartition at PredictorEngineApp.java:153) finished in 17.010 s 18/04/17 17:29:17 INFO scheduler.DAGScheduler: Job 1503 finished: foreachPartition at PredictorEngineApp.java:153, took 17.105594 s 18/04/17 17:29:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4de7eba1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4de7eba10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49610, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291f4, negotiated timeout = 60000 18/04/17 17:29:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291f4 18/04/17 17:29:17 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291f4 closed 18/04/17 17:29:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:17 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.20 from job set of time 1523975340000 ms 18/04/17 17:29:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1495.0 (TID 1495) in 18166 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:29:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1495.0, whose tasks have all completed, from pool 18/04/17 17:29:18 INFO scheduler.DAGScheduler: ResultStage 1495 (foreachPartition at PredictorEngineApp.java:153) finished in 18.167 s 18/04/17 17:29:18 INFO scheduler.DAGScheduler: Job 1494 finished: foreachPartition at PredictorEngineApp.java:153, took 18.241379 s 18/04/17 17:29:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x10a43874 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x10a438740x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38637, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98d7, negotiated timeout = 60000 18/04/17 17:29:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98d7 18/04/17 17:29:18 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98d7 closed 18/04/17 17:29:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:18 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.12 from job set of time 1523975340000 ms 18/04/17 17:29:20 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1496.0 (TID 1496) in 20451 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:29:20 INFO cluster.YarnClusterScheduler: Removed TaskSet 1496.0, whose tasks have all completed, from pool 18/04/17 17:29:20 INFO scheduler.DAGScheduler: ResultStage 1496 (foreachPartition at PredictorEngineApp.java:153) finished in 20.452 s 18/04/17 17:29:20 INFO scheduler.DAGScheduler: Job 1495 finished: foreachPartition at PredictorEngineApp.java:153, took 20.527607 s 18/04/17 17:29:20 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5cfbb4aa connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5cfbb4aa0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38644, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98da, negotiated timeout = 60000 18/04/17 17:29:20 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98da 18/04/17 17:29:20 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98da closed 18/04/17 17:29:20 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:20 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.15 from job set of time 1523975340000 ms 18/04/17 17:29:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1485.0 (TID 1485) in 22380 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:29:22 INFO cluster.YarnClusterScheduler: Removed TaskSet 1485.0, whose tasks have all completed, from pool 18/04/17 17:29:22 INFO scheduler.DAGScheduler: ResultStage 1485 (foreachPartition at PredictorEngineApp.java:153) finished in 22.380 s 18/04/17 17:29:22 INFO scheduler.DAGScheduler: Job 1484 finished: foreachPartition at PredictorEngineApp.java:153, took 22.409248 s 18/04/17 17:29:22 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7a567fda connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7a567fda0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:22 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45033, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:22 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c992b, negotiated timeout = 60000 18/04/17 17:29:22 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c992b 18/04/17 17:29:22 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c992b closed 18/04/17 17:29:22 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:22 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.27 from job set of time 1523975340000 ms 18/04/17 17:29:24 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1483.0 (TID 1483) in 24509 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:29:24 INFO cluster.YarnClusterScheduler: Removed TaskSet 1483.0, whose tasks have all completed, from pool 18/04/17 17:29:24 INFO scheduler.DAGScheduler: ResultStage 1483 (foreachPartition at PredictorEngineApp.java:153) finished in 24.509 s 18/04/17 17:29:24 INFO scheduler.DAGScheduler: Job 1482 finished: foreachPartition at PredictorEngineApp.java:153, took 24.530009 s 18/04/17 17:29:24 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xbb34acb connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xbb34acb0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:24 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:24 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45040, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:24 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1505.0 (TID 1505) in 24435 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:29:24 INFO cluster.YarnClusterScheduler: Removed TaskSet 1505.0, whose tasks have all completed, from pool 18/04/17 17:29:24 INFO scheduler.DAGScheduler: ResultStage 1505 (foreachPartition at PredictorEngineApp.java:153) finished in 24.436 s 18/04/17 17:29:24 INFO scheduler.DAGScheduler: Job 1504 finished: foreachPartition at PredictorEngineApp.java:153, took 24.533244 s 18/04/17 17:29:24 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x49d78f99 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x49d78f990x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:24 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:24 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49636, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:24 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c992c, negotiated timeout = 60000 18/04/17 17:29:24 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291f7, negotiated timeout = 60000 18/04/17 17:29:24 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c992c 18/04/17 17:29:24 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c992c closed 18/04/17 17:29:24 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:24 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291f7 18/04/17 17:29:24 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291f7 closed 18/04/17 17:29:24 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:24 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.5 from job set of time 1523975340000 ms 18/04/17 17:29:24 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.11 from job set of time 1523975340000 ms 18/04/17 17:29:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1482.0 (TID 1482) in 25689 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:29:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 1482.0, whose tasks have all completed, from pool 18/04/17 17:29:25 INFO scheduler.DAGScheduler: ResultStage 1482 (foreachPartition at PredictorEngineApp.java:153) finished in 25.689 s 18/04/17 17:29:25 INFO scheduler.DAGScheduler: Job 1481 finished: foreachPartition at PredictorEngineApp.java:153, took 25.706665 s 18/04/17 17:29:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x586eebd2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x586eebd20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49643, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b291f8, negotiated timeout = 60000 18/04/17 17:29:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b291f8 18/04/17 17:29:25 INFO zookeeper.ZooKeeper: Session: 0x2626be142b291f8 closed 18/04/17 17:29:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:25 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.26 from job set of time 1523975340000 ms 18/04/17 17:29:31 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1489.0 (TID 1489) in 31850 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:29:31 INFO cluster.YarnClusterScheduler: Removed TaskSet 1489.0, whose tasks have all completed, from pool 18/04/17 17:29:31 INFO scheduler.DAGScheduler: ResultStage 1489 (foreachPartition at PredictorEngineApp.java:153) finished in 31.850 s 18/04/17 17:29:31 INFO scheduler.DAGScheduler: Job 1488 finished: foreachPartition at PredictorEngineApp.java:153, took 31.892653 s 18/04/17 17:29:31 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x73490839 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:29:31 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x734908390x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:29:31 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
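A pattern repeats throughout this stretch of the driver log: right after the DAGScheduler reports a "foreachPartition at PredictorEngineApp.java:153" job as finished, one HBase client connection (hconnection-0x...) opens a ZooKeeper session and closes it within the same second, and only then does the JobScheduler mark the streaming job finished. Because this is the ApplicationMaster/driver log, that session comes from driver-side code running inside each output job. The sketch below is a hypothetical reconstruction of one shape that would produce exactly this sequence; the real PredictorEngineApp source is not part of this log, the table name, column family, and the idea of writing a per-batch marker are invented for illustration, and the HBase 1.x client API is assumed to match the ConnectionManager$HConnectionImplementation entries above.

// Hypothetical sketch only; the real PredictorEngineApp source is not in this log.
// It reproduces the per-job sequence seen above: the foreachPartition stage finishes,
// the driver opens and immediately closes one HBase connection ("hconnection-0x..."
// ZooKeeper session), and then the streaming job is reported finished.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.streaming.api.java.JavaDStream;

public final class PerBatchHBaseSketch {

    static void wireOutput(JavaDStream<String> predictions) {
        predictions.foreachRDD((JavaRDD<String> rdd) -> {
            // Distributed part: runs on the executors and shows up in this log as the
            // "foreachPartition at PredictorEngineApp.java:153" ResultStage.
            rdd.foreachPartition(records -> {
                while (records.hasNext()) {
                    records.next(); // score/write each record (executor-side work, omitted here)
                }
            });

            // Driver-side part: a fresh HBase Connection per batch. Creating it opens the
            // "RecoverableZooKeeper ... hconnection-0x..." session; the try-with-resources
            // close produces the "Closing zookeeper sessionid=..." lines.
            Configuration conf = HBaseConfiguration.create(); // picks up the staged hbase-site.xml
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("batch_status"))) { // illustrative name
                Put marker = new Put(Bytes.toBytes(String.valueOf(System.currentTimeMillis())));
                marker.addColumn(Bytes.toBytes("d"), Bytes.toBytes("done"), Bytes.toBytes("1"));
                table.put(marker);
            }
        });
    }
}

A Connection created once and reused across batches would avoid renegotiating a ZooKeeper session every minute for every stream; whether that trade-off matters for this application cannot be judged from the log alone.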
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:29:31 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38682, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:29:31 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98dd, negotiated timeout = 60000 18/04/17 17:29:31 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98dd 18/04/17 17:29:31 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98dd closed 18/04/17 17:29:31 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:29:31 INFO scheduler.JobScheduler: Finished job streaming job 1523975340000 ms.10 from job set of time 1523975340000 ms 18/04/17 17:29:31 INFO scheduler.JobScheduler: Total delay: 31.982 s for time 1523975340000 ms (execution: 31.930 s) 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1980 from persistence list 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1980 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1980 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1980 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1981 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1981 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1981 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1981 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1982 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1982 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1982 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1982 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1983 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1983 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1983 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1983 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1984 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1984 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1984 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1984 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1985 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1985 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1985 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1985 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1986 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1986 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1986 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1986 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1987 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1987 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1987 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1987 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1988 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1988 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1988 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1988 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1989 
from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1989 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1989 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1989 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1990 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1990 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1990 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1990 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1991 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1991 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1991 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1991 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1992 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1992 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1992 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1992 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1993 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1993 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1993 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1993 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1994 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1994 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1994 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1994 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1995 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1995 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1995 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1995 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1996 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1996 18/04/17 17:29:31 INFO kafka.KafkaRDD: Removing RDD 1996 from persistence list 18/04/17 17:29:31 INFO storage.BlockManager: Removing RDD 1996 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 1997 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 1997 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 1997 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 1997 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 1998 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 1998 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 1998 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 1998 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 1999 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 1999 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 1999 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 1999 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2000 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2000 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2000 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2000 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2001 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2001 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2001 from 
persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2001 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2002 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2002 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2002 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2002 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2003 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2003 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2003 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2003 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2004 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2004 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2004 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2004 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2005 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2005 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2005 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2005 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2006 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2006 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2006 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2006 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2007 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2007 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2007 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2007 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2008 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2008 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2008 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2008 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2009 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2009 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2009 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2009 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2010 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2010 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2010 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2010 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2011 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2011 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2011 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2011 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2012 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2012 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2012 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2012 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2013 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2013 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2013 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2013 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2014 from 
persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2014 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2014 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2014 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2015 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2015 18/04/17 17:29:32 INFO kafka.KafkaRDD: Removing RDD 2015 from persistence list 18/04/17 17:29:32 INFO storage.BlockManager: Removing RDD 2015 18/04/17 17:29:32 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:29:32 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523975220000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Added jobs for time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.0 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.1 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.0 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.3 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.2 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.3 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.5 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.4 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.6 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.7 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.8 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.4 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.9 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.10 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.11 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.12 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.13 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.13 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.14 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.16 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.15 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO 
scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.16 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.17 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.14 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.19 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.18 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.17 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.20 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.22 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.21 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.23 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.21 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.25 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.24 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.26 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.27 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.28 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.29 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.30 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.31 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.32 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.30 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.33 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.34 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975400000 ms.35 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1506 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: 
ResultStage 1507 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1507 (KafkaRDD[2074] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1507 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1491 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1507_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1507_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created 
broadcast 1507 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1507 (KafkaRDD[2074] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1507.0 with 1 tasks 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1481_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1507 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1508 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1508 (KafkaRDD[2089] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1507.0 (TID 1507, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1508 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1481_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1508_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1508_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1508 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1508 (KafkaRDD[2089] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1508.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1508 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1509 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1509 (KafkaRDD[2091] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1508.0 (TID 1508, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1509 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1482 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1480_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1509_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1509_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: 
Created broadcast 1509 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1509 (KafkaRDD[2091] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1509.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1509 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1510 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1510 (KafkaRDD[2068] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1480_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1509.0 (TID 1509, ***hostname masked***, executor 10, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1510 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1481 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1482_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1510_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1510_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1510 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1510 (KafkaRDD[2068] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1510.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1510 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1511 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1511 (KafkaRDD[2090] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1511 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1510.0 (TID 1510, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1482_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1483 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1484_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1511_piece0 stored as bytes in 
memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1511_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1511 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1511 (KafkaRDD[2090] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1511.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1511 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1512 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1512 (KafkaRDD[2088] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1512 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1511.0 (TID 1511, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1484_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1508_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1485 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1507_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1512_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1483_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1512_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1512 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1512 (KafkaRDD[2088] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1512.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1512 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1513 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1513 (KafkaRDD[2081] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1513 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1483_piece0 on ***hostname 
masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1512.0 (TID 1512, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1484 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1486_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1513_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1513_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1513 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1513 (KafkaRDD[2081] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1513.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1513 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1514 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1514 (KafkaRDD[2084] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1486_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1514 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1513.0 (TID 1513, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1487 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1485_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1514_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1514_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1514 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1514 (KafkaRDD[2084] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1514.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1514 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1515 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1515 (KafkaRDD[2063] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 
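The burst of entries at 17:30:00 lays out the application's structure: batch times advance in 60 s steps (1523975340000 ms to 1523975400000 ms), each batch produces jobs numbered ms.0 through ms.35, and every job is a single-partition ResultStage over a KafkaRDD built by createDirectStream at PredictorEngineApp.java:125 and drained by foreachPartition at line 153. One layout that yields exactly this pattern is one Kafka direct stream per topic, each with its own output action. The sketch below is an assumption-laden reconstruction, not the actual source: the broker list, topic names, and per-record work are placeholders, and it uses the Spark 1.6 / Kafka 0.8 direct API that matches the kafka.KafkaRDD and createDirectStream entries in this log.

// Hypothetical sketch only -- not the actual PredictorEngineApp source. It mirrors the
// structure implied by the log: one Kafka direct stream per topic, a 60-second batch
// interval, and one foreachPartition output job per stream per batch
// ("streaming job <batch time> ms.N" with N = 0..35 above).
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public final class DirectStreamLayoutSketch {

    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine");
        // 60 s batches: batch times in the log advance from 1523975340000 to 1523975400000 ms.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers

        List<String> topics = Arrays.asList("topic-0", "topic-1", "topic-2"); // the real job runs ~36 streams
        for (String topic : topics) {
            // createDirectStream (PredictorEngineApp.java:125 in the log): one KafkaRDD per topic per batch.
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, Collections.singleton(topic));

            // Each output action becomes one numbered streaming job and one single-partition
            // ResultStage running foreachPartition (line 153 in the log).
            stream.foreachRDD((JavaPairRDD<String, String> rdd) ->
                    rdd.foreachPartition(records -> {
                        while (records.hasNext()) {
                            records.next(); // score the record and write the result out (omitted here)
                        }
                    }));
        }

        jssc.start();
        jssc.awaitTermination();
    }
}

Each additional topic/output action adds another job to every batch; the "Total delay: 31.982 s ... (execution: 31.930 s)" line earlier in this section shows the whole job set still finishing well inside the 60 s interval, so batches keep their scheduled timestamps.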
18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1515 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1514.0 (TID 1514, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1485_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1486 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1489 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1487_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1515_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1515_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1515 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1515 (KafkaRDD[2063] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1515.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1515 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1516 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1516 (KafkaRDD[2057] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1516 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1515.0 (TID 1515, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1511_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1487_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1510_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1488 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1516_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1516_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1489_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1516 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1516 (KafkaRDD[2057] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1516.0 with 1 
tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1516 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1517 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1517 (KafkaRDD[2082] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1517 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1514_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1516.0 (TID 1516, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1517_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1489_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1517_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1517 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1517 (KafkaRDD[2082] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1517.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1517 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1518 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1518 (KafkaRDD[2065] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1512_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1518 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1517.0 (TID 1517, ***hostname masked***, executor 5, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1518_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1518_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1518 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1518 (KafkaRDD[2065] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1518.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got 
job 1518 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1519 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1519 (KafkaRDD[2058] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1516_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1519 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1518.0 (TID 1518, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1515_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1519_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1519_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1519 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1519 (KafkaRDD[2058] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1519.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1519 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1520 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1520 (KafkaRDD[2071] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1520 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1519.0 (TID 1519, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2034 bytes) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1520_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1520_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1520 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1520 (KafkaRDD[2071] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1520.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1521 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1521 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1521 (KafkaRDD[2079] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1521 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1520.0 (TID 1520, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1521_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1521_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1521 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1521 (KafkaRDD[2079] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1521.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1520 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1522 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1522 (KafkaRDD[2076] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1522 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1521.0 (TID 1521, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1522_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1522_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1522 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1522 (KafkaRDD[2076] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1522.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1523 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1523 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1523 (KafkaRDD[2087] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1523 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 
INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1522.0 (TID 1522, ***hostname masked***, executor 1, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1523_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1523_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1523 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1523 (KafkaRDD[2087] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1523.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1524 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1524 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1524 (KafkaRDD[2083] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1524 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1521_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1523.0 (TID 1523, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1517_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1490 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1513_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1488_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1488_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1524_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1524_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1524 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1524 (KafkaRDD[2083] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1524.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1526 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1525 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 
INFO scheduler.DAGScheduler: Submitting ResultStage 1525 (KafkaRDD[2078] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1525 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1524.0 (TID 1524, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1525_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1525_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1525 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1525 (KafkaRDD[2078] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1525.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1525 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1526 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1526 (KafkaRDD[2080] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1526 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1525.0 (TID 1525, ***hostname masked***, executor 11, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1522_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1526_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1526_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1526 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1526 (KafkaRDD[2080] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1526.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1522 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1527 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1527 (KafkaRDD[2062] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1527 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added 
broadcast_1523_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1526.0 (TID 1526, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1509_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1493 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1527_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1491_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1527_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1527 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1527 (KafkaRDD[2062] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1527.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1527 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1528 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1528 (KafkaRDD[2066] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1520_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1528 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1525_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1524_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1527.0 (TID 1527, ***hostname masked***, executor 11, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1491_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1492 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1490_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1528_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1528_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1528 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1528 (KafkaRDD[2066] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO 
cluster.YarnClusterScheduler: Adding task set 1528.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1528 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1529 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1529 (KafkaRDD[2064] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1529 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1528.0 (TID 1528, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1490_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1495 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1526_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1493_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1529_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1529_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1529 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1529 (KafkaRDD[2064] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1529.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1529 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1530 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1530 (KafkaRDD[2067] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1530 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1529.0 (TID 1529, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1493_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1527_piece0 in memory on ***hostname masked***:57847 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1494 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1530_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added 
broadcast_1530_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1519_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1530 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1518_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1530 (KafkaRDD[2067] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1530.0 with 1 tasks 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1492_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1530 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1531 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1531 (KafkaRDD[2075] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1531 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1530.0 (TID 1530, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1492_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1497 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1531_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1531_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1495_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1531 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1531 (KafkaRDD[2075] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1531.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1531 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1532 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1532 (KafkaRDD[2061] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1532 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: 
Starting task 0.0 in stage 1531.0 (TID 1531, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1495_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1532_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1532_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1532 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1532 (KafkaRDD[2061] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1532.0 with 1 tasks 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Got job 1532 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1533 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1533 (KafkaRDD[2085] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1496 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1533 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1528_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1494_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1532.0 (TID 1532, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:30:00 INFO storage.MemoryStore: Block broadcast_1533_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1533_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO spark.SparkContext: Created broadcast 1533 from broadcast at DAGScheduler.scala:1006 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1533 (KafkaRDD[2085] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1529_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Adding task set 1533.0 with 1 tasks 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1494_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1533.0 (TID 1533, ***hostname masked***, executor 6, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1497_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1497_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 
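The run above shows the driver-side shape of one micro-batch: each Kafka partition becomes a KafkaRDD created by the createDirectStream call at PredictorEngineApp.java:125, each of the resulting single-task ResultStages is the foreachPartition action at PredictorEngineApp.java:153, and (in the entries that follow) every "Job N finished" line is trailed by a short-lived hconnection-0x... HBase client whose ZooKeeper session is opened and closed immediately. The sketch below is only a plausible reconstruction of a driver loop that would emit this pattern, not the application's actual source. The broker list, topic, HBase table and column names, the batch interval, and the idea of persisting Kafka offsets per batch are all assumptions introduced for illustration; the HBase 1.x ConnectionFactory API is inferred from the client.ConnectionManager$HConnectionImplementation messages.

```java
// Hypothetical sketch of the driver loop implied by this log (Spark 1.6, Kafka 0.8 direct stream).
// Placeholders NOT taken from the log: broker list, topic name, table/column names,
// the 60 s batch interval, and what is actually done with each record.
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.HasOffsetRanges;
import org.apache.spark.streaming.kafka.KafkaUtils;
import org.apache.spark.streaming.kafka.OffsetRange;

public class PredictorEngineSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "broker-1:9092,broker-2:9092"); // hypothetical
        Set<String> topics = new HashSet<>(Arrays.asList("predictor-input"));   // hypothetical

        // Corresponds to the "createDirectStream at PredictorEngineApp.java:125" frames:
        // one KafkaRDD per batch, one RDD partition per Kafka topic-partition.
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jssc, String.class, String.class,
                StringDecoder.class, StringDecoder.class, kafkaParams, topics);

        stream.foreachRDD(rdd -> {
            // Each call below becomes one single-task ResultStage in the log
            // ("foreachPartition at PredictorEngineApp.java:153").
            rdd.foreachPartition(records -> {
                while (records.hasNext()) {
                    // Per-record processing runs on the executors; the actual work
                    // is not visible in this driver log.
                    records.next();
                }
            });

            // After each job finishes, the log shows a short-lived driver-side HBase
            // connection (hconnection-0x... plus a ZooKeeper session that is opened
            // and closed right away). Persisting the batch's Kafka offsets is shown
            // here only as one plausible reason for that access.
            OffsetRange[] ranges = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
            Configuration hbaseConf = HBaseConfiguration.create();
            try (Connection hbase = ConnectionFactory.createConnection(hbaseConf);
                 Table table = hbase.getTable(TableName.valueOf("stream_offsets"))) { // hypothetical
                for (OffsetRange r : ranges) {
                    Put put = new Put(Bytes.toBytes(r.topic() + ":" + r.partition()));
                    put.addColumn(Bytes.toBytes("o"), Bytes.toBytes("until"),
                            Bytes.toBytes(Long.toString(r.untilOffset())));
                    table.put(put);
                }
            }
        });

        jssc.start();
        jssc.awaitTermination();
    }
}
```

If the real code looks anything like this, the back-to-back ZooKeeper session open/close pairs recorded after each job are simply the per-batch, per-output-operation cost of creating a fresh HBase Connection every time; reusing a single long-lived connection on the driver (or caching one per executor when writing inside foreachPartition) would remove most of that churn.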
18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1498 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1496_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1496_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1531_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1499_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1499_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1530_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1532_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1500 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1498_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1498_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1499 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1501_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1501_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1502 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1500_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1500_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1501 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1503_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Added broadcast_1533_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1503_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1507.0 (TID 1507) in 96 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: ResultStage 1507 (foreachPartition at PredictorEngineApp.java:153) finished in 0.096 s 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1507.0, whose tasks have all completed, from pool 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Job 1506 finished: foreachPartition at PredictorEngineApp.java:153, took 0.114998 s 18/04/17 17:30:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x45690e95 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname 
masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x45690e950x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38808, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98e3, negotiated timeout = 60000 18/04/17 17:30:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98e3 18/04/17 17:30:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98e3 closed 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1519.0 (TID 1519) in 81 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:30:00 INFO scheduler.DAGScheduler: ResultStage 1519 (foreachPartition at PredictorEngineApp.java:153) finished in 0.082 s 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1519.0, whose tasks have all completed, from pool 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Job 1518 finished: foreachPartition at PredictorEngineApp.java:153, took 0.134060 s 18/04/17 17:30:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x58f1914d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x58f1914d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38811, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.18 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98e4, negotiated timeout = 60000 18/04/17 17:30:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98e4 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1508 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1533.0 (TID 1533) in 63 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1533.0, whose tasks have all completed, from pool 18/04/17 17:30:00 INFO scheduler.DAGScheduler: ResultStage 1533 (foreachPartition at PredictorEngineApp.java:153) finished in 0.063 s 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Job 1532 finished: foreachPartition at PredictorEngineApp.java:153, took 0.137683 s 18/04/17 17:30:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x64f6bf1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x64f6bf10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1507_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49791, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1507_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1519_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1519_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1520 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1504 18/04/17 17:30:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98e4 closed 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1502_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1502_piece0 on ***hostname masked***:41751 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1503 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1506 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1504_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1504_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1505 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1506_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1506_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO spark.ContextCleaner: Cleaned accumulator 1507 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29205, negotiated timeout = 60000 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1505_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:30:00 INFO storage.BlockManagerInfo: Removed broadcast_1505_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:30:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29205 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.2 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29205 closed 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.29 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1517.0 (TID 1517) in 212 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1517.0, whose tasks have all completed, from pool 18/04/17 17:30:00 INFO scheduler.DAGScheduler: ResultStage 1517 (foreachPartition at PredictorEngineApp.java:153) finished in 0.212 s 18/04/17 
17:30:00 INFO scheduler.DAGScheduler: Job 1516 finished: foreachPartition at PredictorEngineApp.java:153, took 0.259912 s 18/04/17 17:30:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x61632ae4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x61632ae40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38818, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98e7, negotiated timeout = 60000 18/04/17 17:30:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98e7 18/04/17 17:30:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98e7 closed 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.26 from job set of time 1523975400000 ms 18/04/17 17:30:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1509.0 (TID 1509) in 761 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:30:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1509.0, whose tasks have all completed, from pool 18/04/17 17:30:00 INFO scheduler.DAGScheduler: ResultStage 1509 (foreachPartition at PredictorEngineApp.java:153) finished in 0.761 s 18/04/17 17:30:00 INFO scheduler.DAGScheduler: Job 1508 finished: foreachPartition at PredictorEngineApp.java:153, took 0.785863 s 18/04/17 17:30:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xb2f8a9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xb2f8a90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49799, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2920d, negotiated timeout = 60000 18/04/17 17:30:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2920d 18/04/17 17:30:00 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2920d closed 18/04/17 17:30:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.35 from job set of time 1523975400000 ms 18/04/17 17:30:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1513.0 (TID 1513) in 2501 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:30:02 INFO cluster.YarnClusterScheduler: Removed TaskSet 1513.0, whose tasks have all completed, from pool 18/04/17 17:30:02 INFO scheduler.DAGScheduler: ResultStage 1513 (foreachPartition at PredictorEngineApp.java:153) finished in 2.501 s 18/04/17 17:30:02 INFO scheduler.DAGScheduler: Job 1512 finished: foreachPartition at PredictorEngineApp.java:153, took 2.536844 s 18/04/17 17:30:02 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4c3e0999 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4c3e09990x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:02 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38827, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:02 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98eb, negotiated timeout = 60000 18/04/17 17:30:02 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98eb 18/04/17 17:30:02 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98eb closed 18/04/17 17:30:02 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:02 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.25 from job set of time 1523975400000 ms 18/04/17 17:30:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1515.0 (TID 1515) in 3143 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:30:03 INFO cluster.YarnClusterScheduler: Removed TaskSet 1515.0, whose tasks have all completed, from pool 18/04/17 17:30:03 INFO scheduler.DAGScheduler: ResultStage 1515 (foreachPartition at PredictorEngineApp.java:153) finished in 3.143 s 18/04/17 17:30:03 INFO scheduler.DAGScheduler: Job 1514 finished: foreachPartition at PredictorEngineApp.java:153, took 3.185025 s 18/04/17 17:30:03 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x713655cc connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x713655cc0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:03 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45212, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:03 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9937, negotiated timeout = 60000 18/04/17 17:30:03 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9937 18/04/17 17:30:03 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9937 closed 18/04/17 17:30:03 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:03 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.7 from job set of time 1523975400000 ms 18/04/17 17:30:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1529.0 (TID 1529) in 4403 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:30:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1529.0, whose tasks have all completed, from pool 18/04/17 17:30:04 INFO scheduler.DAGScheduler: ResultStage 1529 (foreachPartition at PredictorEngineApp.java:153) finished in 4.404 s 18/04/17 17:30:04 INFO scheduler.DAGScheduler: Job 1528 finished: foreachPartition at PredictorEngineApp.java:153, took 4.468453 s 18/04/17 17:30:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x593d8089 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x593d80890x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49814, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29210, negotiated timeout = 60000 18/04/17 17:30:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29210 18/04/17 17:30:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29210 closed 18/04/17 17:30:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:04 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.8 from job set of time 1523975400000 ms 18/04/17 17:30:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1523.0 (TID 1523) in 4609 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:30:04 INFO scheduler.DAGScheduler: ResultStage 1523 (foreachPartition at PredictorEngineApp.java:153) finished in 4.610 s 18/04/17 17:30:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1523.0, whose tasks have all completed, from pool 18/04/17 17:30:04 INFO scheduler.DAGScheduler: Job 1523 finished: foreachPartition at PredictorEngineApp.java:153, took 4.672967 s 18/04/17 17:30:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x10a63072 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x10a630720x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45223, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9938, negotiated timeout = 60000 18/04/17 17:30:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9938 18/04/17 17:30:04 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9938 closed 18/04/17 17:30:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:04 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.31 from job set of time 1523975400000 ms 18/04/17 17:30:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1522.0 (TID 1522) in 5281 ms on ***hostname masked*** (executor 1) (1/1) 18/04/17 17:30:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1522.0, whose tasks have all completed, from pool 18/04/17 17:30:05 INFO scheduler.DAGScheduler: ResultStage 1522 (foreachPartition at PredictorEngineApp.java:153) finished in 5.282 s 18/04/17 17:30:05 INFO scheduler.DAGScheduler: Job 1520 finished: foreachPartition at PredictorEngineApp.java:153, took 5.342384 s 18/04/17 17:30:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x69f9acfe connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x69f9acfe0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45226, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9939, negotiated timeout = 60000 18/04/17 17:30:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9939 18/04/17 17:30:05 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9939 closed 18/04/17 17:30:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:05 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.20 from job set of time 1523975400000 ms 18/04/17 17:30:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1520.0 (TID 1520) in 5601 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:30:05 INFO scheduler.DAGScheduler: ResultStage 1520 (foreachPartition at PredictorEngineApp.java:153) finished in 5.601 s 18/04/17 17:30:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1520.0, whose tasks have all completed, from pool 18/04/17 17:30:05 INFO scheduler.DAGScheduler: Job 1519 finished: foreachPartition at PredictorEngineApp.java:153, took 5.656216 s 18/04/17 17:30:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xe25b950 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xe25b9500x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38848, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98ef, negotiated timeout = 60000 18/04/17 17:30:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98ef 18/04/17 17:30:05 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98ef closed 18/04/17 17:30:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:05 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.15 from job set of time 1523975400000 ms 18/04/17 17:30:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1524.0 (TID 1524) in 7838 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:30:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1524.0, whose tasks have all completed, from pool 18/04/17 17:30:07 INFO scheduler.DAGScheduler: ResultStage 1524 (foreachPartition at PredictorEngineApp.java:153) finished in 7.839 s 18/04/17 17:30:07 INFO scheduler.DAGScheduler: Job 1524 finished: foreachPartition at PredictorEngineApp.java:153, took 7.890183 s 18/04/17 17:30:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2ec10113 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2ec101130x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49830, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29212, negotiated timeout = 60000 18/04/17 17:30:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29212 18/04/17 17:30:07 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29212 closed 18/04/17 17:30:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:07 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.27 from job set of time 1523975400000 ms 18/04/17 17:30:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1510.0 (TID 1510) in 8035 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:30:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1510.0, whose tasks have all completed, from pool 18/04/17 17:30:08 INFO scheduler.DAGScheduler: ResultStage 1510 (foreachPartition at PredictorEngineApp.java:153) finished in 8.035 s 18/04/17 17:30:08 INFO scheduler.DAGScheduler: Job 1509 finished: foreachPartition at PredictorEngineApp.java:153, took 8.063176 s 18/04/17 17:30:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x155c0204 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x155c02040x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45238, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c993a, negotiated timeout = 60000 18/04/17 17:30:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c993a 18/04/17 17:30:08 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c993a closed 18/04/17 17:30:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:08 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.12 from job set of time 1523975400000 ms 18/04/17 17:30:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1518.0 (TID 1518) in 8514 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:30:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1518.0, whose tasks have all completed, from pool 18/04/17 17:30:08 INFO scheduler.DAGScheduler: ResultStage 1518 (foreachPartition at PredictorEngineApp.java:153) finished in 8.515 s 18/04/17 17:30:08 INFO scheduler.DAGScheduler: Job 1517 finished: foreachPartition at PredictorEngineApp.java:153, took 8.564634 s 18/04/17 17:30:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xac2a959 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0xac2a9590x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49837, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29214, negotiated timeout = 60000 18/04/17 17:30:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29214 18/04/17 17:30:08 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29214 closed 18/04/17 17:30:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:08 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.9 from job set of time 1523975400000 ms 18/04/17 17:30:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1512.0 (TID 1512) in 10712 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:30:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1512.0, whose tasks have all completed, from pool 18/04/17 17:30:10 INFO scheduler.DAGScheduler: ResultStage 1512 (foreachPartition at PredictorEngineApp.java:153) finished in 10.712 s 18/04/17 17:30:10 INFO scheduler.DAGScheduler: Job 1511 finished: foreachPartition at PredictorEngineApp.java:153, took 10.744937 s 18/04/17 17:30:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x79201bb4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x79201bb40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38867, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98f1, negotiated timeout = 60000 18/04/17 17:30:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98f1 18/04/17 17:30:10 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98f1 closed 18/04/17 17:30:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.32 from job set of time 1523975400000 ms 18/04/17 17:30:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1521.0 (TID 1521) in 10829 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:30:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1521.0, whose tasks have all completed, from pool 18/04/17 17:30:10 INFO scheduler.DAGScheduler: ResultStage 1521 (foreachPartition at PredictorEngineApp.java:153) finished in 10.829 s 18/04/17 17:30:10 INFO scheduler.DAGScheduler: Job 1521 finished: foreachPartition at PredictorEngineApp.java:153, took 10.887430 s 18/04/17 17:30:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x87c838c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x87c838c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49847, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29215, negotiated timeout = 60000 18/04/17 17:30:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29215 18/04/17 17:30:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29215 closed 18/04/17 17:30:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.23 from job set of time 1523975400000 ms 18/04/17 17:30:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1532.0 (TID 1532) in 11854 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:30:11 INFO scheduler.DAGScheduler: ResultStage 1532 (foreachPartition at PredictorEngineApp.java:153) finished in 11.855 s 18/04/17 17:30:11 INFO cluster.YarnClusterScheduler: Removed TaskSet 1532.0, whose tasks have all completed, from pool 18/04/17 17:30:11 INFO scheduler.DAGScheduler: Job 1531 finished: foreachPartition at PredictorEngineApp.java:153, took 11.927077 s 18/04/17 17:30:11 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7c1e93f4 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7c1e93f40x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45256, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c993b, negotiated timeout = 60000 18/04/17 17:30:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c993b 18/04/17 17:30:12 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c993b closed 18/04/17 17:30:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.5 from job set of time 1523975400000 ms 18/04/17 17:30:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1530.0 (TID 1530) in 12619 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:30:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1530.0, whose tasks have all completed, from pool 18/04/17 17:30:12 INFO scheduler.DAGScheduler: ResultStage 1530 (foreachPartition at PredictorEngineApp.java:153) finished in 12.620 s 18/04/17 17:30:12 INFO scheduler.DAGScheduler: Job 1529 finished: foreachPartition at PredictorEngineApp.java:153, took 12.687828 s 18/04/17 17:30:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7e87a5ae connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7e87a5ae0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49855, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29218, negotiated timeout = 60000 18/04/17 17:30:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29218 18/04/17 17:30:12 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29218 closed 18/04/17 17:30:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.11 from job set of time 1523975400000 ms 18/04/17 17:30:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1528.0 (TID 1528) in 14565 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:30:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1528.0, whose tasks have all completed, from pool 18/04/17 17:30:14 INFO scheduler.DAGScheduler: ResultStage 1528 (foreachPartition at PredictorEngineApp.java:153) finished in 14.566 s 18/04/17 17:30:14 INFO scheduler.DAGScheduler: Job 1527 finished: foreachPartition at PredictorEngineApp.java:153, took 14.627664 s 18/04/17 17:30:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1ab3f397 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x1ab3f3970x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49861, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29219, negotiated timeout = 60000 18/04/17 17:30:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29219 18/04/17 17:30:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29219 closed 18/04/17 17:30:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:14 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.10 from job set of time 1523975400000 ms 18/04/17 17:30:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1527.0 (TID 1527) in 17432 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:30:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1527.0, whose tasks have all completed, from pool 18/04/17 17:30:17 INFO scheduler.DAGScheduler: ResultStage 1527 (foreachPartition at PredictorEngineApp.java:153) finished in 17.432 s 18/04/17 17:30:17 INFO scheduler.DAGScheduler: Job 1522 finished: foreachPartition at PredictorEngineApp.java:153, took 17.506328 s 18/04/17 17:30:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7cdfaba7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7cdfaba70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38890, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98f5, negotiated timeout = 60000 18/04/17 17:30:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98f5 18/04/17 17:30:17 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98f5 closed 18/04/17 17:30:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:17 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.6 from job set of time 1523975400000 ms 18/04/17 17:30:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1514.0 (TID 1514) in 17638 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:30:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1514.0, whose tasks have all completed, from pool 18/04/17 17:30:17 INFO scheduler.DAGScheduler: ResultStage 1514 (foreachPartition at PredictorEngineApp.java:153) finished in 17.639 s 18/04/17 17:30:17 INFO scheduler.DAGScheduler: Job 1513 finished: foreachPartition at PredictorEngineApp.java:153, took 17.677340 s 18/04/17 17:30:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x63410d37 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x63410d370x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38894, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98f6, negotiated timeout = 60000 18/04/17 17:30:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98f6 18/04/17 17:30:17 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98f6 closed 18/04/17 17:30:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:17 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.28 from job set of time 1523975400000 ms 18/04/17 17:30:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1531.0 (TID 1531) in 18389 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:30:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1531.0, whose tasks have all completed, from pool 18/04/17 17:30:18 INFO scheduler.DAGScheduler: ResultStage 1531 (foreachPartition at PredictorEngineApp.java:153) finished in 18.390 s 18/04/17 17:30:18 INFO scheduler.DAGScheduler: Job 1530 finished: foreachPartition at PredictorEngineApp.java:153, took 18.460119 s 18/04/17 17:30:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x69c13c78 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x69c13c780x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:38897, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a98f7, negotiated timeout = 60000 18/04/17 17:30:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a98f7 18/04/17 17:30:18 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a98f7 closed 18/04/17 17:30:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:18 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.19 from job set of time 1523975400000 ms 18/04/17 17:30:18 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1508.0 (TID 1508) in 18514 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:30:18 INFO cluster.YarnClusterScheduler: Removed TaskSet 1508.0, whose tasks have all completed, from pool 18/04/17 17:30:18 INFO scheduler.DAGScheduler: ResultStage 1508 (foreachPartition at PredictorEngineApp.java:153) finished in 18.514 s 18/04/17 17:30:18 INFO scheduler.DAGScheduler: Job 1507 finished: foreachPartition at PredictorEngineApp.java:153, took 18.535862 s 18/04/17 17:30:18 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4de0250b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4de0250b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:18 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:18 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45283, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:18 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c993d, negotiated timeout = 60000 18/04/17 17:30:18 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c993d 18/04/17 17:30:18 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c993d closed 18/04/17 17:30:18 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:18 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.33 from job set of time 1523975400000 ms 18/04/17 17:30:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1526.0 (TID 1526) in 19482 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:30:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 1526.0, whose tasks have all completed, from pool 18/04/17 17:30:19 INFO scheduler.DAGScheduler: ResultStage 1526 (foreachPartition at PredictorEngineApp.java:153) finished in 19.483 s 18/04/17 17:30:19 INFO scheduler.DAGScheduler: Job 1525 finished: foreachPartition at PredictorEngineApp.java:153, took 19.539401 s 18/04/17 17:30:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5c3f3b61 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5c3f3b610x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45288, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c993f, negotiated timeout = 60000 18/04/17 17:30:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c993f 18/04/17 17:30:19 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c993f closed 18/04/17 17:30:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1511.0 (TID 1511) in 19552 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:30:19 INFO cluster.YarnClusterScheduler: Removed TaskSet 1511.0, whose tasks have all completed, from pool 18/04/17 17:30:19 INFO scheduler.DAGScheduler: ResultStage 1511 (foreachPartition at PredictorEngineApp.java:153) finished in 19.552 s 18/04/17 17:30:19 INFO scheduler.DAGScheduler: Job 1510 finished: foreachPartition at PredictorEngineApp.java:153, took 19.582823 s 18/04/17 17:30:19 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3c9f067a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3c9f067a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:19 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:19 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49886, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:19 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.24 from job set of time 1523975400000 ms 18/04/17 17:30:19 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b2921d, negotiated timeout = 60000 18/04/17 17:30:19 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b2921d 18/04/17 17:30:19 INFO zookeeper.ZooKeeper: Session: 0x2626be142b2921d closed 18/04/17 17:30:19 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:19 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.34 from job set of time 1523975400000 ms 18/04/17 17:30:24 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1525.0 (TID 1525) in 24327 ms on ***hostname masked*** (executor 11) (1/1) 18/04/17 17:30:24 INFO scheduler.DAGScheduler: ResultStage 1525 (foreachPartition at PredictorEngineApp.java:153) finished in 24.327 s 18/04/17 17:30:24 INFO cluster.YarnClusterScheduler: Removed TaskSet 1525.0, whose tasks have all completed, from pool 18/04/17 17:30:24 INFO scheduler.DAGScheduler: Job 1526 finished: foreachPartition at PredictorEngineApp.java:153, took 24.381621 s 18/04/17 17:30:24 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x13acc2e2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x13acc2e20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:24 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:24 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45302, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:24 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9941, negotiated timeout = 60000 18/04/17 17:30:24 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9941 18/04/17 17:30:24 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9941 closed 18/04/17 17:30:24 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:24 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.22 from job set of time 1523975400000 ms 18/04/17 17:30:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1516.0 (TID 1516) in 25394 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:30:25 INFO cluster.YarnClusterScheduler: Removed TaskSet 1516.0, whose tasks have all completed, from pool 18/04/17 17:30:25 INFO scheduler.DAGScheduler: ResultStage 1516 (foreachPartition at PredictorEngineApp.java:153) finished in 25.394 s 18/04/17 17:30:25 INFO scheduler.DAGScheduler: Job 1515 finished: foreachPartition at PredictorEngineApp.java:153, took 25.439094 s 18/04/17 17:30:25 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x76dae7b8 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:30:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x76dae7b80x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:30:25 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:30:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:49901, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:30:25 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29221, negotiated timeout = 60000 18/04/17 17:30:25 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29221 18/04/17 17:30:25 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29221 closed 18/04/17 17:30:25 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:30:25 INFO scheduler.JobScheduler: Finished job streaming job 1523975400000 ms.1 from job set of time 1523975400000 ms 18/04/17 17:30:25 INFO scheduler.JobScheduler: Total delay: 25.526 s for time 1523975400000 ms (execution: 25.477 s) 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2020 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2020 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2020 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2020 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2021 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2021 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2021 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2021 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2022 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2022 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2022 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2022 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2023 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2023 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2023 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2023 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2024 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2024 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2024 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2024 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2025 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2025 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2025 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2025 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2026 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2026 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2026 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2026 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2027 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2027 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2027 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2027 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2028 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2028 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2028 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2028 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2029 
from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2029 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2029 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2029 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2030 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2030 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2030 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2030 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2031 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2031 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2031 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2031 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2032 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2032 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2032 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2032 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2033 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2033 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2033 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2033 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2034 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2034 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2034 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2034 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2035 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2035 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2035 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2035 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2036 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2036 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2036 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2036 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2037 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2037 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2037 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2037 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2038 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2038 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2038 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2038 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2039 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2039 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2039 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2039 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2040 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2040 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2040 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2040 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2041 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2041 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2041 from 
persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2041 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2042 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2042 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2042 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2042 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2043 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2043 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2043 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2043 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2044 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2044 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2044 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2044 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2045 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2045 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2045 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2045 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2046 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2046 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2046 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2046 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2047 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2047 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2047 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2047 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2048 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2048 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2048 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2048 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2049 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2049 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2049 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2049 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2050 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2050 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2050 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2050 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2051 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2051 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2051 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2051 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2052 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2052 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2052 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2052 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2053 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2053 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2053 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2053 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2054 from 
persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2054 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2054 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2054 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2055 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2055 18/04/17 17:30:25 INFO kafka.KafkaRDD: Removing RDD 2055 from persistence list 18/04/17 17:30:25 INFO storage.BlockManager: Removing RDD 2055 18/04/17 17:30:25 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer() 18/04/17 17:30:25 INFO scheduler.InputInfoTracker: remove old batch metadata: 1523975280000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Added jobs for time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.0 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.2 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.1 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.3 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.4 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.0 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.4 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.5 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.3 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.7 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.6 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.9 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.8 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.10 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.11 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.12 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.13 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.14 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.13 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.15 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.16 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO 
scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.16 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.14 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.17 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.17 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.19 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.18 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.20 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.21 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.22 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.21 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.23 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.24 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.25 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.26 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.27 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.28 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.29 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.30 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.31 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.30 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.32 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.34 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.33 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.JobScheduler: Starting job streaming job 1523975460000 ms.35 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at 
PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1533 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1534 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1534 (KafkaRDD[2125] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO spark.SparkContext: Starting job: foreachPartition at PredictorEngineApp.java:153 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1534 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1534_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1534_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1534 from broadcast at DAGScheduler.scala:1006 
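[Editor's annotation] The pattern recorded above — one KafkaRDD per batch created by createDirectStream at PredictorEngineApp.java:125, fanned out into per-topic foreachPartition jobs at PredictorEngineApp.java:153 (job numbering ms.0 through ms.35 per batch), each followed by an HBase hconnection-0x... ZooKeeper session that is opened and immediately closed — is consistent with a Spark 1.6 streaming driver shaped roughly like the sketch below. This is a hypothetical reconstruction, not the application's actual source: the broker list, topic name, HBase table "predictions", and column family "d" are invented placeholders; only the 60-second batch interval (batch times 1523975400000 -> 1523975460000 ms), the direct Kafka stream, the foreachPartition write path, and the per-partition HBase connection behaviour are inferred from the log itself.

// Hypothetical sketch of a driver matching the call sites in the log; not the real PredictorEngineApp.
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public final class PredictorEngineSketch {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("PredictorEngineSketch");
    // 60 s batch interval, matching the one-minute batch times in the log.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

    // Assumed broker list and topic name; the real values are not visible in the log.
    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092");
    Set<String> topics = new HashSet<>(Arrays.asList("predictor-input"));

    // Receiver-less direct stream: exactly one KafkaRDD per topic per batch,
    // which is what the DAGScheduler entries above show.
    JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
        jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, topics);

    stream.foreachRDD(rdd -> {
      rdd.foreachPartition(records -> {
        // A fresh HBase connection per partition per batch; every one opens and
        // then closes a ZooKeeper session, producing the churn seen in the log.
        Configuration hbaseConf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
             Table table = connection.getTable(TableName.valueOf("predictions"))) { // assumed table
          while (records.hasNext()) {
            Tuple2<String, String> record = records.next();
            Put put = new Put(Bytes.toBytes(record._1()));
            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("payload"),
                Bytes.toBytes(record._2()));
            table.put(put);
          }
        }
      });
    });

    jssc.start();
    jssc.awaitTermination();
  }
}

If the real application follows this shape, creating and closing a Connection inside every foreachPartition call would account for the steady open/close pairs of ZooKeeper sessions above (one per finished streaming job); a common alternative is to cache a single HBase Connection per executor JVM and reuse it across batches.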
18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1534 (KafkaRDD[2125] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1534.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1534 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1535 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1535 (KafkaRDD[2101] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1534.0 (TID 1534, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2048 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1535 stored as values in memory (estimated size 5.7 KB, free 491.2 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1535_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.2 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1535_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1535 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1535 (KafkaRDD[2101] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1535.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1535 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1536 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1536 (KafkaRDD[2119] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1535.0 (TID 1535, ***hostname masked***, executor 4, partition 0, RACK_LOCAL, 2041 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1536 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1536_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1536_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1536 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1536 (KafkaRDD[2119] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1536.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1536 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1537 (foreachPartition at 
PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1537 (KafkaRDD[2127] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1536.0 (TID 1536, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1537 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1537_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1537_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1537 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1537 (KafkaRDD[2127] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1537.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1537 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1538 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1538 (KafkaRDD[2094] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1537.0 (TID 1537, ***hostname masked***, executor 12, partition 0, NODE_LOCAL, 2037 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1538 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1538_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1538_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1538 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1538 (KafkaRDD[2094] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1538.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1538 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1539 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1539 (KafkaRDD[2123] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1538.0 (TID 1538, ***hostname masked***, executor 6, partition 0, 
RACK_LOCAL, 2034 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1539 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1539_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1539_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1539 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1539 (KafkaRDD[2123] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1539.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1539 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1540 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1540 (KafkaRDD[2126] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1539.0 (TID 1539, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2042 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1540 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1540_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1535_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1540_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1540 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1540 (KafkaRDD[2126] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1540.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1540 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1541 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1541 (KafkaRDD[2102] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1540.0 (TID 1540, ***hostname masked***, executor 9, partition 0, RACK_LOCAL, 2053 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1541 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1541_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added 
broadcast_1538_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1541_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1541 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1541 (KafkaRDD[2102] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1541.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1541 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1542 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1542 (KafkaRDD[2121] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1541.0 (TID 1541, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1542 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1542_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1542_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1542 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1542 (KafkaRDD[2121] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1542.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1542 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1543 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1543 (KafkaRDD[2104] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1542.0 (TID 1542, ***hostname masked***, executor 3, partition 0, NODE_LOCAL, 2049 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1543 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1539_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1543_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1543_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1543 from broadcast at DAGScheduler.scala:1006 18/04/17 
17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1543 (KafkaRDD[2104] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1543.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1543 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1544 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1544 (KafkaRDD[2114] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1544 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1543.0 (TID 1543, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1540_piece0 in memory on ***hostname masked***:55033 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1544_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1544_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1544 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1544 (KafkaRDD[2114] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1544.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1544 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1545 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1545 (KafkaRDD[2097] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1545 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1544.0 (TID 1544, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2069 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1545_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1545_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1545 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1545 (KafkaRDD[2097] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1545.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1545 (foreachPartition at 
PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1546 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1546 (KafkaRDD[2120] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1546 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1545.0 (TID 1545, ***hostname masked***, executor 1, partition 0, NODE_LOCAL, 2059 bytes) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1541_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1542_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1546_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1546_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1546 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1546 (KafkaRDD[2120] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1546.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1546 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1547 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1547 (KafkaRDD[2112] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1547 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1546.0 (TID 1546, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1547_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1547_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1547 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1547 (KafkaRDD[2112] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1547.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1547 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1548 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO 
scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1548 (KafkaRDD[2117] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1548 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1536_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1547.0 (TID 1547, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1526 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1510 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1508_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1545_piece0 in memory on ***hostname masked***:56034 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1546_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1548_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1548_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1548 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1548 (KafkaRDD[2117] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1548.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1548 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1549 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1549 (KafkaRDD[2115] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1549 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1548.0 (TID 1548, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2045 bytes) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1543_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1508_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1549_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1549_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1549 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 
INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1549 (KafkaRDD[2115] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1549.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1551 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1550 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1550 (KafkaRDD[2099] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1509 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1550 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1549.0 (TID 1549, ***hostname masked***, executor 12, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1547_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1511_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1511_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1550_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1550_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1550 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1550 (KafkaRDD[2099] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1550.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1549 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1551 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1551 (KafkaRDD[2118] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1512 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1550.0 (TID 1550, ***hostname masked***, executor 3, partition 0, RACK_LOCAL, 2060 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1551 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1510_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1534_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO 
storage.BlockManagerInfo: Removed broadcast_1510_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1551_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.0 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1551_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.4 MB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1511 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1551 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1551 (KafkaRDD[2118] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1551.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1550 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1552 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1509_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1552 (KafkaRDD[2098] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1551.0 (TID 1551, ***hostname masked***, executor 4, partition 0, NODE_LOCAL, 2056 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1552 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1509_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1550_piece0 in memory on ***hostname masked***:60107 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1515 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1513_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1549_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1548_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1552_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1552_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1513_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1552 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1552 (KafkaRDD[2098] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1552.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1552 (foreachPartition at 
PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1553 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1553 (KafkaRDD[2116] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1552.0 (TID 1552, ***hostname masked***, executor 8, partition 0, NODE_LOCAL, 2045 bytes) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1553 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1514 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1512_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1551_piece0 in memory on ***hostname masked***:55279 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1553_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1553_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1553 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1553 (KafkaRDD[2116] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1553.0 with 1 tasks 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1544_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1553 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1554 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1554 (KafkaRDD[2100] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1554 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1553.0 (TID 1553, ***hostname masked***, executor 2, partition 0, NODE_LOCAL, 2047 bytes) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1512_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1513 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1517 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1552_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1515_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1554_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 
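
The run of scheduler entries above follows one pattern: for each batch the driver submits a few dozen independent single-stage jobs (streaming job 1523975460000 ms.7, ms.25, ms.35, and so on), each made of one ResultStage over a KafkaRDD created at "createDirectStream at PredictorEngineApp.java:125" and executed by "foreachPartition at PredictorEngineApp.java:153". That is the shape a Spark 1.6 Streaming driver produces when it opens several direct Kafka streams (for example one per topic) and writes every micro-batch out with foreachRDD/foreachPartition. The sketch below is a hypothetical reconstruction of that shape only, not the actual PredictorEngineApp source; the broker address, topic names and the 60-second batch interval are placeholders.

// Hypothetical reconstruction, not the actual PredictorEngineApp source. It only
// illustrates the Spark 1.6 Java API calls matching the two call sites in this log:
// KafkaUtils.createDirectStream (PredictorEngineApp.java:125) and
// foreachPartition (PredictorEngineApp.java:153).
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import scala.Tuple2;

public class PredictorEngineSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("predictor-engine-sketch");
        // Batch interval is a guess; the log only shows the batch time 1523975460000 ms.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(60));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "kafka-broker:9092");  // placeholder brokers

        // One direct stream, and therefore one numbered output operation per batch,
        // per topic; this log shows a few dozen of them per batch interval.
        for (String topic : Arrays.asList("topic-a", "topic-b")) {     // placeholder topics
            Set<String> topics = new HashSet<>(Arrays.asList(topic));
            JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, topics);  // each micro-batch is logged as "KafkaRDD[n] at createDirectStream"

            // Every micro-batch of every stream is submitted as its own single-stage job:
            // "Got job N (foreachPartition ...)" / "Submitting ResultStage ... (KafkaRDD[...])".
            stream.foreachRDD((JavaPairRDD<String, String> rdd) -> {
                rdd.foreachPartition((Iterator<Tuple2<String, String>> records) -> {
                    while (records.hasNext()) {
                        Tuple2<String, String> record = records.next();
                        // score the event and persist the prediction (runs on the executors)
                    }
                });
            });
        }

        jssc.start();
        jssc.awaitTermination();
    }
}

Further down, each "Job ... finished" entry is followed by a short-lived hconnection-0x... client: RecoverableZooKeeper opens a session against the /hbase ensemble and ConnectionManager$HConnectionImplementation closes it again a few lines later. Because these entries appear in the driver log immediately after job completion, they are consistent with the application opening an HBase connection on the driver once per finished output operation (for example to record offsets or batch status) and closing it right away; per-record writes inside foreachPartition would run on the executors and would not appear here. A minimal sketch of an open/put/close cycle that would produce exactly this ZooKeeper session pattern follows; the class, table name, column family and purpose are all assumptions.

// Hypothetical sketch of a driver-side HBase write; table, column family and qualifier are invented.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public final class BatchStatusWriter {
    public static void recordBatchStatus(String rowKey, String status) throws IOException {
        // Picks up the hbase-site.xml shipped with the application.
        Configuration conf = HBaseConfiguration.create();
        // Creating the Connection triggers the RecoverableZooKeeper / ClientCnxn lines
        // ("Process identifier=hconnection-0x... connecting to ZooKeeper ensemble ...").
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("batch_status"))) {  // invented table
            Put put = new Put(Bytes.toBytes(rowKey));
            put.addColumn(Bytes.toBytes("s"), Bytes.toBytes("value"), Bytes.toBytes(status));
            table.put(put);
        }
        // Leaving the try block closes the connection, logged as
        // "Closing zookeeper sessionid=0x..." followed by "Session: 0x... closed".
    }
}

If that is indeed what is happening, each cycle costs a fresh ZooKeeper session and HBase metadata lookup per output operation per batch; long-running streaming applications commonly create one Connection at startup and reuse it rather than reconnecting every time.
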
18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1554_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1554 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1554 (KafkaRDD[2100] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1554.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1554 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1555 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1555 (KafkaRDD[2093] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1515_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1555 stored as values in memory (estimated size 5.7 KB, free 491.0 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1554.0 (TID 1554, ***hostname masked***, executor 2, partition 0, RACK_LOCAL, 2051 bytes) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1516 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1514_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1514_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1537_piece0 in memory on ***hostname masked***:42188 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1518_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1555_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1555_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1518_piece0 on ***hostname masked***:55095 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1555 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1555 (KafkaRDD[2093] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1555.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1555 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1556 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1556 (KafkaRDD[2107] at createDirectStream at PredictorEngineApp.java:125), which has 
no missing parents 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1556 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1555.0 (TID 1555, ***hostname masked***, executor 7, partition 0, RACK_LOCAL, 2063 bytes) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1553_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1519 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1517_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1517_piece0 on ***hostname masked***:53081 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1518 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1516_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1556_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1556_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1556 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1556 (KafkaRDD[2107] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1556.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1557 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1516_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1557 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1557 (KafkaRDD[2111] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1557 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1523 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1556.0 (TID 1556, ***hostname masked***, executor 8, partition 0, RACK_LOCAL, 2064 bytes) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1521_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1521_piece0 on ***hostname masked***:43653 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1522 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1555_piece0 in memory on ***hostname masked***:41751 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1520_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1520_piece0 on ***hostname masked***:53081 
in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1557_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1557_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1557 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1554_piece0 in memory on ***hostname masked***:43653 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1557 (KafkaRDD[2111] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1521 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1557.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1556 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1558 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1558 (KafkaRDD[2124] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1523_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1558 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1557.0 (TID 1557, ***hostname masked***, executor 6, partition 0, RACK_LOCAL, 2050 bytes) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1523_piece0 on ***hostname masked***:50260 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1524 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1522_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1558_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1558_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1558 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1558 (KafkaRDD[2124] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1558.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1558 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1559 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1559 (KafkaRDD[2103] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO 
storage.BlockManagerInfo: Removed broadcast_1522_piece0 on ***hostname masked***:56034 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1559 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1558.0 (TID 1558, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2040 bytes) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1556_piece0 in memory on ***hostname masked***:50260 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1527 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1525_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1525_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1559_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1559_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1557_piece0 in memory on ***hostname masked***:35790 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1559 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1559 (KafkaRDD[2103] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Adding task set 1559.0 with 1 tasks 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Got job 1559 (foreachPartition at PredictorEngineApp.java:153) with 1 output partitions 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1560 (foreachPartition at PredictorEngineApp.java:153) 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Parents of final stage: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Missing parents: List() 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting ResultStage 1560 (KafkaRDD[2110] at createDirectStream at PredictorEngineApp.java:125), which has no missing parents 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1524_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1560 stored as values in memory (estimated size 5.7 KB, free 491.1 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1559.0 (TID 1559, ***hostname masked***, executor 5, partition 0, RACK_LOCAL, 2046 bytes) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1524_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.MemoryStore: Block broadcast_1560_piece0 stored as bytes in memory (estimated size 3.1 KB, free 491.1 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1560_piece0 in memory on ***IP masked***:45737 (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1525 18/04/17 17:31:00 INFO spark.SparkContext: Created broadcast 1560 from broadcast at DAGScheduler.scala:1006 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1560 (KafkaRDD[2110] at createDirectStream at PredictorEngineApp.java:125) 18/04/17 17:31:00 INFO 
cluster.YarnClusterScheduler: Adding task set 1560.0 with 1 tasks 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1528_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1560.0 (TID 1560, ***hostname masked***, executor 10, partition 0, RACK_LOCAL, 2048 bytes) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1528_piece0 on ***hostname masked***:60107 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1529 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1527_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1527_piece0 on ***hostname masked***:57847 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1558_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1559_piece0 in memory on ***hostname masked***:53081 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1528 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1526_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1526_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Added broadcast_1560_piece0 in memory on ***hostname masked***:55095 (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1534.0 (TID 1534) in 172 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1534.0, whose tasks have all completed, from pool 18/04/17 17:31:00 INFO scheduler.DAGScheduler: ResultStage 1534 (foreachPartition at PredictorEngineApp.java:153) finished in 0.172 s 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Job 1533 finished: foreachPartition at PredictorEngineApp.java:153, took 0.180918 s 18/04/17 17:31:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2cec8071 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2cec80710x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39059, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9904, negotiated timeout = 60000 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1532 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1530_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1530_piece0 on ***hostname masked***:42188 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1531 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1529_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1529_piece0 on ***hostname masked***:55033 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1530 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1533_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9904 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1533_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1534 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1532_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1532_piece0 on ***hostname masked***:35790 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO spark.ContextCleaner: Cleaned accumulator 1533 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1531_piece0 on ***IP masked***:45737 in memory (size: 3.1 KB, free: 491.5 MB) 18/04/17 17:31:00 INFO storage.BlockManagerInfo: Removed broadcast_1531_piece0 on ***hostname masked***:55279 in memory (size: 3.1 KB, free: 3.1 GB) 18/04/17 17:31:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9904 closed 18/04/17 17:31:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.33 from job set of time 1523975460000 ms 18/04/17 17:31:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1537.0 (TID 1537) in 566 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:31:00 INFO cluster.YarnClusterScheduler: Removed TaskSet 1537.0, whose tasks have all completed, from pool 18/04/17 17:31:00 INFO scheduler.DAGScheduler: ResultStage 1537 (foreachPartition at PredictorEngineApp.java:153) finished in 0.566 s 18/04/17 17:31:00 INFO scheduler.DAGScheduler: Job 1536 finished: foreachPartition at PredictorEngineApp.java:153, took 0.588176 s 18/04/17 17:31:00 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x57094a5d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname 
masked***:2181 sessionTimeout=60000 watcher=hconnection-0x57094a5d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:00 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:00 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39062, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:00 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9907, negotiated timeout = 60000 18/04/17 17:31:00 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9907 18/04/17 17:31:00 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9907 closed 18/04/17 17:31:00 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:00 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.35 from job set of time 1523975460000 ms 18/04/17 17:31:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1548.0 (TID 1548) in 1762 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:31:01 INFO cluster.YarnClusterScheduler: Removed TaskSet 1548.0, whose tasks have all completed, from pool 18/04/17 17:31:01 INFO scheduler.DAGScheduler: ResultStage 1548 (foreachPartition at PredictorEngineApp.java:153) finished in 1.763 s 18/04/17 17:31:01 INFO scheduler.DAGScheduler: Job 1547 finished: foreachPartition at PredictorEngineApp.java:153, took 1.847467 s 18/04/17 17:31:01 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7f959766 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7f9597660x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:01 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39066, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:01 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9909, negotiated timeout = 60000 18/04/17 17:31:01 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9909 18/04/17 17:31:01 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9909 closed 18/04/17 17:31:01 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:01 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.25 from job set of time 1523975460000 ms 18/04/17 17:31:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1550.0 (TID 1550) in 4014 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:31:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1550.0, whose tasks have all completed, from pool 18/04/17 17:31:04 INFO scheduler.DAGScheduler: ResultStage 1550 (foreachPartition at PredictorEngineApp.java:153) finished in 4.015 s 18/04/17 17:31:04 INFO scheduler.DAGScheduler: Job 1551 finished: foreachPartition at PredictorEngineApp.java:153, took 4.105960 s 18/04/17 17:31:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3854f173 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x3854f1730x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:50050, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29230, negotiated timeout = 60000 18/04/17 17:31:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29230 18/04/17 17:31:04 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29230 closed 18/04/17 17:31:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:04 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.7 from job set of time 1523975460000 ms 18/04/17 17:31:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1554.0 (TID 1554) in 4348 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:31:04 INFO scheduler.DAGScheduler: ResultStage 1554 (foreachPartition at PredictorEngineApp.java:153) finished in 4.348 s 18/04/17 17:31:04 INFO cluster.YarnClusterScheduler: Removed TaskSet 1554.0, whose tasks have all completed, from pool 18/04/17 17:31:04 INFO scheduler.DAGScheduler: Job 1553 finished: foreachPartition at PredictorEngineApp.java:153, took 4.459705 s 18/04/17 17:31:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x734cb881 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x734cb8810x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:04 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:04 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39076, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:04 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a990e, negotiated timeout = 60000 18/04/17 17:31:04 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a990e 18/04/17 17:31:04 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a990e closed 18/04/17 17:31:04 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:04 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.8 from job set of time 1523975460000 ms 18/04/17 17:31:05 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1540.0 (TID 1540) in 5502 ms on ***hostname masked*** (executor 9) (1/1) 18/04/17 17:31:05 INFO cluster.YarnClusterScheduler: Removed TaskSet 1540.0, whose tasks have all completed, from pool 18/04/17 17:31:05 INFO scheduler.DAGScheduler: ResultStage 1540 (foreachPartition at PredictorEngineApp.java:153) finished in 5.502 s 18/04/17 17:31:05 INFO scheduler.DAGScheduler: Job 1539 finished: foreachPartition at PredictorEngineApp.java:153, took 5.540382 s 18/04/17 17:31:05 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x541fb1b7 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x541fb1b70x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:05 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:50060, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29231, negotiated timeout = 60000 18/04/17 17:31:05 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29231 18/04/17 17:31:05 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29231 closed 18/04/17 17:31:05 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:05 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.34 from job set of time 1523975460000 ms 18/04/17 17:31:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1542.0 (TID 1542) in 7395 ms on ***hostname masked*** (executor 3) (1/1) 18/04/17 17:31:07 INFO scheduler.DAGScheduler: ResultStage 1542 (foreachPartition at PredictorEngineApp.java:153) finished in 7.395 s 18/04/17 17:31:07 INFO cluster.YarnClusterScheduler: Removed TaskSet 1542.0, whose tasks have all completed, from pool 18/04/17 17:31:07 INFO scheduler.DAGScheduler: Job 1541 finished: foreachPartition at PredictorEngineApp.java:153, took 7.443284 s 18/04/17 17:31:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x268dc9c9 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x268dc9c90x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:07 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39089, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:07 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9911, negotiated timeout = 60000 18/04/17 17:31:07 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9911 18/04/17 17:31:07 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9911 closed 18/04/17 17:31:07 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:07 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.29 from job set of time 1523975460000 ms 18/04/17 17:31:08 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1553.0 (TID 1553) in 7915 ms on ***hostname masked*** (executor 2) (1/1) 18/04/17 17:31:08 INFO cluster.YarnClusterScheduler: Removed TaskSet 1553.0, whose tasks have all completed, from pool 18/04/17 17:31:08 INFO scheduler.DAGScheduler: ResultStage 1553 (foreachPartition at PredictorEngineApp.java:153) finished in 7.916 s 18/04/17 17:31:08 INFO scheduler.DAGScheduler: Job 1552 finished: foreachPartition at PredictorEngineApp.java:153, took 8.022506 s 18/04/17 17:31:08 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x36545927 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x365459270x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:08 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:08 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39093, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9912, negotiated timeout = 60000 18/04/17 17:31:08 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9912 18/04/17 17:31:08 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9912 closed 18/04/17 17:31:08 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:08 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.24 from job set of time 1523975460000 ms 18/04/17 17:31:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1558.0 (TID 1558) in 8948 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:31:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1558.0, whose tasks have all completed, from pool 18/04/17 17:31:09 INFO scheduler.DAGScheduler: ResultStage 1558 (foreachPartition at PredictorEngineApp.java:153) finished in 8.948 s 18/04/17 17:31:09 INFO scheduler.DAGScheduler: Job 1556 finished: foreachPartition at PredictorEngineApp.java:153, took 9.077190 s 18/04/17 17:31:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2f4c5d5b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2f4c5d5b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45479, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c995a, negotiated timeout = 60000 18/04/17 17:31:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c995a 18/04/17 17:31:09 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c995a closed 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:09 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.32 from job set of time 1523975460000 ms 18/04/17 17:31:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1560.0 (TID 1560) in 9000 ms on ***hostname masked*** (executor 10) (1/1) 18/04/17 17:31:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1560.0, whose tasks have all completed, from pool 18/04/17 17:31:09 INFO scheduler.DAGScheduler: ResultStage 1560 (foreachPartition at PredictorEngineApp.java:153) finished in 9.000 s 18/04/17 17:31:09 INFO scheduler.DAGScheduler: Job 1559 finished: foreachPartition at PredictorEngineApp.java:153, took 9.136527 s 18/04/17 17:31:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4521e561 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x4521e5610x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39100, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9914, negotiated timeout = 60000 18/04/17 17:31:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9914 18/04/17 17:31:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9914 closed 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:09 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.18 from job set of time 1523975460000 ms 18/04/17 17:31:09 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1535.0 (TID 1535) in 9480 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:31:09 INFO cluster.YarnClusterScheduler: Removed TaskSet 1535.0, whose tasks have all completed, from pool 18/04/17 17:31:09 INFO scheduler.DAGScheduler: ResultStage 1535 (foreachPartition at PredictorEngineApp.java:153) finished in 9.481 s 18/04/17 17:31:09 INFO scheduler.DAGScheduler: Job 1534 finished: foreachPartition at PredictorEngineApp.java:153, took 9.493236 s 18/04/17 17:31:09 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x561fddd2 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x561fddd20x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39103, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9915, negotiated timeout = 60000 18/04/17 17:31:09 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9915 18/04/17 17:31:09 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9915 closed 18/04/17 17:31:09 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:09 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.9 from job set of time 1523975460000 ms 18/04/17 17:31:10 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1539.0 (TID 1539) in 10183 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:31:10 INFO cluster.YarnClusterScheduler: Removed TaskSet 1539.0, whose tasks have all completed, from pool 18/04/17 17:31:10 INFO scheduler.DAGScheduler: ResultStage 1539 (foreachPartition at PredictorEngineApp.java:153) finished in 10.184 s 18/04/17 17:31:10 INFO scheduler.DAGScheduler: Job 1538 finished: foreachPartition at PredictorEngineApp.java:153, took 10.217625 s 18/04/17 17:31:10 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x41e59e1 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x41e59e10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:10 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:50085, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:10 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29233, negotiated timeout = 60000 18/04/17 17:31:10 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29233 18/04/17 17:31:10 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29233 closed 18/04/17 17:31:10 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:10 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.31 from job set of time 1523975460000 ms 18/04/17 17:31:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1543.0 (TID 1543) in 12264 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:31:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1543.0, whose tasks have all completed, from pool 18/04/17 17:31:12 INFO scheduler.DAGScheduler: ResultStage 1543 (foreachPartition at PredictorEngineApp.java:153) finished in 12.265 s 18/04/17 17:31:12 INFO scheduler.DAGScheduler: Job 1542 finished: foreachPartition at PredictorEngineApp.java:153, took 12.317276 s 18/04/17 17:31:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x38e57566 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x38e575660x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39113, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9917, negotiated timeout = 60000 18/04/17 17:31:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9917 18/04/17 17:31:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9917 closed 18/04/17 17:31:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.12 from job set of time 1523975460000 ms 18/04/17 17:31:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1555.0 (TID 1555) in 12390 ms on ***hostname masked*** (executor 7) (1/1) 18/04/17 17:31:12 INFO cluster.YarnClusterScheduler: Removed TaskSet 1555.0, whose tasks have all completed, from pool 18/04/17 17:31:12 INFO scheduler.DAGScheduler: ResultStage 1555 (foreachPartition at PredictorEngineApp.java:153) finished in 12.390 s 18/04/17 17:31:12 INFO scheduler.DAGScheduler: Job 1554 finished: foreachPartition at PredictorEngineApp.java:153, took 12.506434 s 18/04/17 17:31:12 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x17bff434 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x17bff4340x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:12 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:12 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39116, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:12 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a9918, negotiated timeout = 60000 18/04/17 17:31:12 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a9918 18/04/17 17:31:12 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a9918 closed 18/04/17 17:31:12 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:12 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.1 from job set of time 1523975460000 ms 18/04/17 17:31:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1546.0 (TID 1546) in 14295 ms on ***hostname masked*** (executor 5) (1/1) 18/04/17 17:31:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1546.0, whose tasks have all completed, from pool 18/04/17 17:31:14 INFO scheduler.DAGScheduler: ResultStage 1546 (foreachPartition at PredictorEngineApp.java:153) finished in 14.296 s 18/04/17 17:31:14 INFO scheduler.DAGScheduler: Job 1545 finished: foreachPartition at PredictorEngineApp.java:153, took 14.361641 s 18/04/17 17:31:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x53df9d01 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x53df9d010x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:50099, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29234, negotiated timeout = 60000 18/04/17 17:31:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29234 18/04/17 17:31:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29234 closed 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:14 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.28 from job set of time 1523975460000 ms 18/04/17 17:31:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1556.0 (TID 1556) in 14271 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:31:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1556.0, whose tasks have all completed, from pool 18/04/17 17:31:14 INFO scheduler.DAGScheduler: ResultStage 1556 (foreachPartition at PredictorEngineApp.java:153) finished in 14.272 s 18/04/17 17:31:14 INFO scheduler.DAGScheduler: Job 1555 finished: foreachPartition at PredictorEngineApp.java:153, took 14.392664 s 18/04/17 17:31:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x31e8d56d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x31e8d56d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:50102, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29235, negotiated timeout = 60000 18/04/17 17:31:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29235 18/04/17 17:31:14 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29235 closed 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:14 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.15 from job set of time 1523975460000 ms 18/04/17 17:31:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1552.0 (TID 1552) in 14505 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:31:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1552.0, whose tasks have all completed, from pool 18/04/17 17:31:14 INFO scheduler.DAGScheduler: ResultStage 1552 (foreachPartition at PredictorEngineApp.java:153) finished in 14.505 s 18/04/17 17:31:14 INFO scheduler.DAGScheduler: Job 1550 finished: foreachPartition at PredictorEngineApp.java:153, took 14.606246 s 18/04/17 17:31:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x15d2e40a connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x15d2e40a0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39128, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a991a, negotiated timeout = 60000 18/04/17 17:31:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a991a 18/04/17 17:31:14 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1557.0 (TID 1557) in 14506 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:31:14 INFO cluster.YarnClusterScheduler: Removed TaskSet 1557.0, whose tasks have all completed, from pool 18/04/17 17:31:14 INFO scheduler.DAGScheduler: ResultStage 1557 (foreachPartition at PredictorEngineApp.java:153) finished in 14.507 s 18/04/17 17:31:14 INFO scheduler.DAGScheduler: Job 1557 finished: foreachPartition at PredictorEngineApp.java:153, took 14.632230 s 18/04/17 17:31:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5ec9f782 connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x5ec9f7820x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45514, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:14 INFO zookeeper.ZooKeeper: Session: 0x3626be1439a991a closed 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c995f, negotiated timeout = 60000 18/04/17 17:31:14 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c995f 18/04/17 17:31:14 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.6 from job set of time 1523975460000 ms 18/04/17 17:31:14 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c995f closed 18/04/17 17:31:14 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:14 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.19 from job set of time 1523975460000 ms 18/04/17 17:31:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1538.0 (TID 1538) in 15003 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:31:15 INFO scheduler.DAGScheduler: ResultStage 1538 (foreachPartition at PredictorEngineApp.java:153) finished in 15.003 s 18/04/17 17:31:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1538.0, whose tasks have all completed, from pool 18/04/17 17:31:15 INFO scheduler.DAGScheduler: Job 1537 finished: foreachPartition at PredictorEngineApp.java:153, took 15.031402 s 18/04/17 17:31:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x672ce8a1 connecting to ZooKeeper ensemble=***hostname 
masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x672ce8a10x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:45518, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x1626be1444c9960, negotiated timeout = 60000 18/04/17 17:31:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1626be1444c9960 18/04/17 17:31:15 INFO zookeeper.ZooKeeper: Session: 0x1626be1444c9960 closed 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:15 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.2 from job set of time 1523975460000 ms 18/04/17 17:31:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1549.0 (TID 1549) in 15158 ms on ***hostname masked*** (executor 12) (1/1) 18/04/17 17:31:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1549.0, whose tasks have all completed, from pool 18/04/17 17:31:15 INFO scheduler.DAGScheduler: ResultStage 1549 (foreachPartition at PredictorEngineApp.java:153) finished in 15.159 s 18/04/17 17:31:15 INFO scheduler.DAGScheduler: Job 1548 finished: foreachPartition at PredictorEngineApp.java:153, took 15.246076 s 18/04/17 17:31:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x8442a9c connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x8442a9c0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:39139, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1536.0 (TID 1536) in 15241 ms on ***hostname masked*** (executor 8) (1/1) 18/04/17 17:31:15 INFO scheduler.DAGScheduler: ResultStage 1536 (foreachPartition at PredictorEngineApp.java:153) finished in 15.241 s 18/04/17 17:31:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1536.0, whose tasks have all completed, from pool 18/04/17 17:31:15 INFO scheduler.DAGScheduler: Job 1535 finished: foreachPartition at PredictorEngineApp.java:153, took 15.258010 s 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x3626be1439a991e, negotiated timeout = 60000 18/04/17 17:31:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7ecbf10b connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x7ecbf10b0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:50117, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1547.0 (TID 1547) in 15200 ms on ***hostname masked*** (executor 6) (1/1) 18/04/17 17:31:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 1547.0, whose tasks have all completed, from pool 18/04/17 17:31:15 INFO scheduler.DAGScheduler: ResultStage 1547 (foreachPartition at PredictorEngineApp.java:153) finished in 15.212 s 18/04/17 17:31:15 INFO scheduler.DAGScheduler: Job 1546 finished: foreachPartition at PredictorEngineApp.java:153, took 15.281301 s 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29237, negotiated timeout = 60000 18/04/17 17:31:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3626be1439a991e 18/04/17 17:31:15 WARN client.AsyncProcess: #3121, the task was rejected by the pool. This is unexpected. 
Server is ***hostname masked***,60020,1523949367813 java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3f377224 rejected from java.util.concurrent.ThreadPoolExecutor@639d4dae[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.sendMultiAction(AsyncProcess.java:1013) at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$000(AsyncProcess.java:600) at org.apache.hadoop.hbase.client.AsyncProcess.submitMultiActions(AsyncProcess.java:449) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:429) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:344) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:238) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1495) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1098) at ru.croc.smartdata.spark.Utils.updateStartOffsets(Utils.java:230) at ru.croc.smartdata.spark.Utils.updateStartOffsets(Utils.java:215) at ru.croc.smartdata.spark.PredictorEngineApp.lambda$processTopic$d8eb8b1f$1(PredictorEngineApp.java:179) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:15 INFO 
zookeeper.ZooKeeper: Session: 0x3626be1439a991e closed 18/04/17 17:31:15 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29237 18/04/17 17:31:15 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29237 closed 18/04/17 17:31:15 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:15 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.23 from job set of time 1523975460000 ms 18/04/17 17:31:15 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.27 from job set of time 1523975460000 ms 18/04/17 17:31:15 ERROR client.AsyncProcess: Cannot get replica 0 location for {"totalColumns":1,"row":"predictor_passport_ru_number_gold","families":{"cf":[{"qualifier":"\\x00\\x00\\x00\\x00","vlen":8,"tag":[],"timestamp":9223372036854775807}]}} 18/04/17 17:31:15 ERROR spark.Utils: Error saving offsets [OffsetRange(topic: 'predictor_passport_ru_number_gold', partition: 0, range: [2536631 -> 2536718])] org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:247) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:227) at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1766) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1495) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1098) at ru.croc.smartdata.spark.Utils.updateStartOffsets(Utils.java:230) at ru.croc.smartdata.spark.Utils.updateStartOffsets(Utils.java:215) at ru.croc.smartdata.spark.PredictorEngineApp.lambda$processTopic$d8eb8b1f$1(PredictorEngineApp.java:179) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 18/04/17 17:31:15 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.20 from job set of time 1523975460000 ms 18/04/17 17:31:15 ERROR scheduler.JobScheduler: Error running job streaming job 1523975460000 ms.20 java.lang.RuntimeException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, at ru.croc.smartdata.spark.Utils.updateStartOffsets(Utils.java:233) at ru.croc.smartdata.spark.Utils.updateStartOffsets(Utils.java:215) at ru.croc.smartdata.spark.PredictorEngineApp.lambda$processTopic$d8eb8b1f$1(PredictorEngineApp.java:179) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:247) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:227) at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1766) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1495) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1098) at ru.croc.smartdata.spark.Utils.updateStartOffsets(Utils.java:230) ... 
21 more 18/04/17 17:31:15 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.RuntimeException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, java.lang.RuntimeException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, at ru.croc.smartdata.spark.Utils.updateStartOffsets(Utils.java:233) at ru.croc.smartdata.spark.Utils.updateStartOffsets(Utils.java:215) at ru.croc.smartdata.spark.PredictorEngineApp.lambda$processTopic$d8eb8b1f$1(PredictorEngineApp.java:179) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:247) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:227) at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1766) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1495) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1098) at ru.croc.smartdata.spark.Utils.updateStartOffsets(Utils.java:230) ... 
21 more 18/04/17 17:31:15 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.RuntimeException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, ) 18/04/17 17:31:15 INFO streaming.StreamingContext: Invoking stop(stopGracefully=false) from shutdown hook 18/04/17 17:31:15 INFO scheduler.JobGenerator: Stopping JobGenerator immediately 18/04/17 17:31:15 INFO util.RecurringTimer: Stopped timer for JobGenerator after time 1523975460000 18/04/17 17:31:15 INFO scheduler.JobGenerator: Stopped JobGenerator 18/04/17 17:31:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1551.0 (TID 1551) in 16964 ms on ***hostname masked*** (executor 4) (1/1) 18/04/17 17:31:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 1551.0, whose tasks have all completed, from pool 18/04/17 17:31:17 INFO scheduler.DAGScheduler: ResultStage 1551 (foreachPartition at PredictorEngineApp.java:153) finished in 16.965 s 18/04/17 17:31:17 INFO scheduler.DAGScheduler: Job 1549 finished: foreachPartition at PredictorEngineApp.java:153, took 17.060775 s 18/04/17 17:31:17 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2e7d796d connecting to ZooKeeper ensemble=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 18/04/17 17:31:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181 sessionTimeout=60000 watcher=hconnection-0x2e7d796d0x0, quorum=***hostname masked***:2181,***hostname masked***:2181,***hostname masked***:2181, baseZNode=/hbase 18/04/17 17:31:17 INFO zookeeper.ClientCnxn: Opening socket connection to server ***hostname masked***/***IP masked***:2181. 
Will not attempt to authenticate using SASL (unknown error) 18/04/17 17:31:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /***IP masked***:50124, server: ***hostname masked***/***IP masked***:2181 18/04/17 17:31:17 INFO zookeeper.ClientCnxn: Session establishment complete on server ***hostname masked***/***IP masked***:2181, sessionid = 0x2626be142b29238, negotiated timeout = 60000 18/04/17 17:31:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2626be142b29238 18/04/17 17:31:17 INFO zookeeper.ZooKeeper: Session: 0x2626be142b29238 closed 18/04/17 17:31:17 INFO zookeeper.ClientCnxn: EventThread shut down 18/04/17 17:31:17 INFO scheduler.JobScheduler: Finished job streaming job 1523975460000 ms.26 from job set of time 1523975460000 ms Exception in thread "streaming-job-executor-27" Exception in thread "streaming-job-executor-16" Exception in thread "streaming-job-executor-15" Exception in thread "streaming-job-executor-29" Exception in thread "streaming-job-executor-13" java.lang.Error: java.lang.InterruptedException at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:612) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1840) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1853) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1866) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1937) at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:920) at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918) at org.apache.spark.api.java.JavaRDDLike$class.foreachPartition(JavaRDDLike.scala:225) at org.apache.spark.api.java.AbstractJavaRDDLike.foreachPartition(JavaRDDLike.scala:46) at ru.croc.smartdata.spark.PredictorEngineApp.lambda$processTopic$d8eb8b1f$1(PredictorEngineApp.java:153) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at 
scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ... 2 more java.lang.Error: java.lang.InterruptedException at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:612) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1840) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1853) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1866) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1937) at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:920) at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918) at org.apache.spark.api.java.JavaRDDLike$class.foreachPartition(JavaRDDLike.scala:225) at org.apache.spark.api.java.AbstractJavaRDDLike.foreachPartition(JavaRDDLike.scala:46) at ru.croc.smartdata.spark.PredictorEngineApp.lambda$processTopic$d8eb8b1f$1(PredictorEngineApp.java:153) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at 
org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ... 2 more java.lang.Error: java.lang.InterruptedException at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:612) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1840) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1853) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1866) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1937) at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:920) at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918) at org.apache.spark.api.java.JavaRDDLike$class.foreachPartition(JavaRDDLike.scala:225) at org.apache.spark.api.java.AbstractJavaRDDLike.foreachPartition(JavaRDDLike.scala:46) at ru.croc.smartdata.spark.PredictorEngineApp.lambda$processTopic$d8eb8b1f$1(PredictorEngineApp.java:153) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ... 
2 more java.lang.Error: java.lang.InterruptedException at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:612) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1840) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1853) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1866) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1937) at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:920) at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918) at org.apache.spark.api.java.JavaRDDLike$class.foreachPartition(JavaRDDLike.scala:225) at org.apache.spark.api.java.AbstractJavaRDDLike.foreachPartition(JavaRDDLike.scala:46) at ru.croc.smartdata.spark.PredictorEngineApp.lambda$processTopic$d8eb8b1f$1(PredictorEngineApp.java:153) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ... 
2 more java.lang.Error: java.lang.InterruptedException at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:612) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1840) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1853) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1866) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1937) at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:920) at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918) at org.apache.spark.api.java.JavaRDDLike$class.foreachPartition(JavaRDDLike.scala:225) at org.apache.spark.api.java.AbstractJavaRDDLike.foreachPartition(JavaRDDLike.scala:46) at ru.croc.smartdata.spark.PredictorEngineApp.lambda$processTopic$d8eb8b1f$1(PredictorEngineApp.java:153) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$4.apply(JavaDStreamLike.scala:343) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ... 
18/04/17 17:31:17 INFO scheduler.JobScheduler: Stopped JobScheduler
18/04/17 17:31:17 INFO streaming.StreamingContext: StreamingContext stopped successfully
18/04/17 17:31:17 INFO spark.SparkContext: Invoking stop() from shutdown hook
18/04/17 17:31:17 INFO ui.SparkUI: Stopped Spark web UI at http://***IP masked***:48756
18/04/17 17:31:17 INFO scheduler.DAGScheduler: ResultStage 1544 (foreachPartition at PredictorEngineApp.java:153) failed in 17.811 s due to Stage cancelled because SparkContext was shut down
18/04/17 17:31:17 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@5c23fd48)
18/04/17 17:31:17 INFO scheduler.DAGScheduler: ResultStage 1443 (foreachPartition at PredictorEngineApp.java:153) failed in 257.794 s due to Stage cancelled because SparkContext was shut down
18/04/17 17:31:17 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@245106ad)
18/04/17 17:31:17 INFO scheduler.DAGScheduler: ResultStage 1545 (foreachPartition at PredictorEngineApp.java:153) failed in 17.808 s due to Stage cancelled because SparkContext was shut down
18/04/17 17:31:17 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@7f36f350)
18/04/17 17:31:17 INFO scheduler.DAGScheduler: ResultStage 1541 (foreachPartition at PredictorEngineApp.java:153) failed in 17.826 s due to Stage cancelled because SparkContext was shut down
18/04/17 17:31:17 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@3331e8a9)
18/04/17 17:31:17 INFO scheduler.DAGScheduler: ResultStage 1559 (foreachPartition at PredictorEngineApp.java:153) failed in 17.736 s due to Stage cancelled because SparkContext was shut down
18/04/17 17:31:17 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@70bb8a3a)
18/04/17 17:31:17 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(1543,1523975477929,JobFailed(org.apache.spark.SparkException: Job 1543 cancelled because SparkContext was shut down))
18/04/17 17:31:17 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(1540,1523975477929,JobFailed(org.apache.spark.SparkException: Job 1540 cancelled because SparkContext was shut down))
18/04/17 17:31:17 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(1544,1523975477929,JobFailed(org.apache.spark.SparkException: Job 1544 cancelled because SparkContext was shut down))
18/04/17 17:31:17 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(1558,1523975477929,JobFailed(org.apache.spark.SparkException: Job 1558 cancelled because SparkContext was shut down))
18/04/17 17:31:17 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(1444,1523975477929,JobFailed(org.apache.spark.SparkException: Job 1444 cancelled because SparkContext was shut down))
18/04/17 17:31:17 INFO yarn.YarnAllocator: Driver requested a total number of 0 executor(s).
18/04/17 17:31:17 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
18/04/17 17:31:17 INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down
18/04/17 17:31:17 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/04/17 17:31:17 INFO storage.MemoryStore: MemoryStore cleared
18/04/17 17:31:17 INFO storage.BlockManager: BlockManager stopped
18/04/17 17:31:17 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
18/04/17 17:31:17 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/04/17 17:31:17 INFO spark.SparkContext: Successfully stopped SparkContext
18/04/17 17:31:17 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: java.lang.RuntimeException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, )
18/04/17 17:31:17 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
18/04/17 17:31:17 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
18/04/17 17:31:17 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
18/04/17 17:31:18 INFO Remoting: Remoting shut down
18/04/17 17:31:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
18/04/17 17:31:18 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1520875508177_0403
18/04/17 17:31:18 INFO util.ShutdownHookManager: Shutdown hook called
18/04/17 17:31:18 INFO util.ShutdownHookManager: Deleting directory /hadoop/2/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/spark-b0c8c53e-6206-4e90-bce9-0667b303a3ca
18/04/17 17:31:18 INFO util.ShutdownHookManager: Deleting directory /hadoop/4/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/spark-95d474fd-7927-498d-adb4-97703518178e
18/04/17 17:31:18 INFO util.ShutdownHookManager: Deleting directory /hadoop/1/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/spark-f834c0e9-c1c2-4076-868d-bed296886555
18/04/17 17:31:18 INFO util.ShutdownHookManager: Deleting directory /hadoop/3/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/spark-c27093ec-8d57-48f8-9bcb-9e86c2eaa3b8
18/04/17 17:31:18 INFO util.ShutdownHookManager: Deleting directory /hadoop/6/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/spark-cd1da986-1298-44b1-a6ca-79b42ba27d41
18/04/17 17:31:18 INFO util.ShutdownHookManager: Deleting directory /hadoop/5/yarn/nm/usercache/jenkins/appcache/application_1520875508177_0403/spark-2e3f2291-2bf3-417b-a8c3-9a6be637cd1e
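
Reading note: the stack traces above run from JavaDStreamLike.foreachRDD through RDD.foreachPartition into PredictorEngineApp.java:153, and the ApplicationMaster unregisters with FAILED because the user class threw org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException, i.e. a per-partition HBase write inside a Spark Streaming batch failed and took the StreamingContext down. The following is only a minimal sketch of that call pattern under stated assumptions (Spark 1.6 Java streaming API, HBase 1.x client); the class name, table, column family and row key below are hypothetical and are not the actual PredictorEngineApp source.

// Hypothetical reconstruction of the foreachRDD -> foreachPartition -> HBase write path
// implicated by the stack trace; all names here are assumptions.
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.streaming.api.java.JavaDStream;

public class PredictorEngineSketch {

    // Assumed target table; the real table name is not visible in the log.
    private static final TableName TABLE = TableName.valueOf("predictions");

    static void processTopic(JavaDStream<String> messages) {
        // foreachRDD(... foreachPartition(...)) matches the frames in the trace above.
        messages.foreachRDD(rdd -> rdd.foreachPartition((Iterator<String> partition) -> {
            // hbase-site.xml is shipped with the job (see the Local resources list), so
            // HBaseConfiguration.create() picks it up from the executor classpath.
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TABLE)) {
                while (partition.hasNext()) {
                    String message = partition.next();
                    // Assumed row key and column layout, purely illustrative.
                    Put put = new Put(Bytes.toBytes(message.hashCode()));
                    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("raw"), Bytes.toBytes(message));
                    // A RetriesExhaustedWithDetailsException from a write like this one,
                    // left unhandled, propagates through the frames shown above and ends
                    // with the FAILED unregistration recorded at 17:31:17.
                    table.put(put);
                }
            }
        }));
    }
}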