<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Flink Kafka program in scala giving timeout error org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Flink-Kafka-program-in-scala-giving-timeout-error-org-apache/m-p/225903#M72707</link>
    <description>&lt;P&gt;Issue resolved.&lt;/P&gt;&lt;P&gt;Work through this checklist:&lt;/P&gt;&lt;P&gt;1. Check that ZooKeeper is running.&lt;/P&gt;&lt;P&gt;2. Check that a Kafka producer and consumer work from the console: create a topic and list it to confirm that Kafka itself is healthy.&lt;/P&gt;&lt;P&gt;3. Use the Flink connector version in sbt that matches your Kafka version.&lt;/P&gt;&lt;P&gt;For Kafka 0.9, the dependency should be:&lt;/P&gt;&lt;PRE&gt;"org.apache.flink" %% "flink-connector-kafka-0.9" % flinkVersion % "provided"&lt;/PRE&gt;&lt;P&gt;and the import in the Scala program: import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaProducer09, FlinkKafkaConsumer09}&lt;/P&gt;&lt;P&gt;For Kafka 0.10:&lt;/P&gt;&lt;PRE&gt;"org.apache.flink" %% "flink-connector-kafka-0.10" % flinkVersion % "provided"

And the import in the Scala program: import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaProducer010, FlinkKafkaConsumer010}&lt;/PRE&gt;</description>
    <pubDate>Mon, 18 Dec 2017 19:47:09 GMT</pubDate>
    <dc:creator>amit_dass</dc:creator>
    <dc:date>2017-12-18T19:47:09Z</dc:date>
    <item>
      <title>Flink Kafka program in scala giving timeout error org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Flink-Kafka-program-in-scala-giving-timeout-error-org-apache/m-p/225901#M72705</link>
      <description>&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;
&lt;TD&gt;&lt;P&gt;I am writing a Flink Kafka integration program as below, but I am getting a timeout error from Kafka:&lt;/P&gt;&lt;PRE&gt;import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer010, FlinkKafkaProducer010}
import org.apache.flink.streaming.util.serialization.SimpleStringSchema
import java.util.Properties

object StreamKafkaProducer {

  def main(args: Array[String]) {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "localhost:9092")
    properties.setProperty("zookeeper.connect", "localhost:2181")
    properties.setProperty("serializer.class", "kafka.serializer.StringEncoder")

    val stream: DataStream[String] = env.fromElements(
      ("Adam"),
      ("Sarah"))

    val kafkaProducer = new FlinkKafkaProducer010[String](
      "localhost:9092",
      "output",
      new SimpleStringSchema
    )
    // write data into Kafka
    stream.addSink(kafkaProducer)

    env.execute("Flink kafka integration")
  }
}
&lt;/PRE&gt;
&lt;P&gt;From the terminal I can see that Kafka and ZooKeeper are running, but when I run the above program from IntelliJ it shows this error:&lt;/P&gt;&lt;PRE&gt;C:\Users\amdass\workspace\flink-project-master&amp;gt;sbt run
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
support was removed in 8.0
[info] Loading project definition from C:\Users\amdass\workspace\flink-
project-master\project
[info] Set current project to Flink Project (in build 
file:/C:/Users/amdass/workspace/flink-project-master/)
[info] Compiling 1 Scala source to C:\Users\amdass\workspace\flink-project-
master\target\scala-2.11\classes...
[info] Running org.example.StreamKafkaProducer
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See &lt;A href="http://www.slf4j.org/codes.html#StaticLoggerBinder" target="_blank"&gt;http://www.slf4j.org/codes.html#StaticLoggerBinder&lt;/A&gt; for further 
details.
Connected to JobManager at Actor[akka://flink/user/jobmanager_1#-563113020] 
with leader session id 5a637740-5c73-4f69-a19e-c8ef7141efa1.
12/15/2017 14:41:49     Job execution switched to status RUNNING.
12/15/2017 14:41:49     Source: Collection Source(1/1) switched to SCHEDULED
12/15/2017 14:41:49     Sink: Unnamed(1/4) switched to SCHEDULED
12/15/2017 14:41:49     Sink: Unnamed(2/4) switched to SCHEDULED
12/15/2017 14:41:49     Sink: Unnamed(3/4) switched to SCHEDULED
12/15/2017 14:41:49     Sink: Unnamed(4/4) switched to SCHEDULED
12/15/2017 14:41:49     Source: Collection Source(1/1) switched to DEPLOYING
12/15/2017 14:41:49     Sink: Unnamed(1/4) switched to DEPLOYING
12/15/2017 14:41:49     Sink: Unnamed(2/4) switched to DEPLOYING
12/15/2017 14:41:49     Sink: Unnamed(3/4) switched to DEPLOYING
12/15/2017 14:41:49     Sink: Unnamed(4/4) switched to DEPLOYING
12/15/2017 14:41:50     Source: Collection Source(1/1) switched to RUNNING
12/15/2017 14:41:50     Sink: Unnamed(2/4) switched to RUNNING
12/15/2017 14:41:50     Sink: Unnamed(4/4) switched to RUNNING
12/15/2017 14:41:50     Sink: Unnamed(3/4) switched to RUNNING
12/15/2017 14:41:50     Sink: Unnamed(1/4) switched to RUNNING
12/15/2017 14:41:50     Source: Collection Source(1/1) switched to FINISHED
12/15/2017 14:41:50     Sink: Unnamed(3/4) switched to FINISHED
12/15/2017 14:41:50     Sink: Unnamed(4/4) switched to FINISHED
12/15/2017 14:42:50     Sink: Unnamed(1/4) switched to FAILED
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 60000 ms.

12/15/2017 14:42:50     Sink: Unnamed(2/4) switched to FAILED
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 60000 ms.

12/15/2017 14:42:50     Job execution switched to status FAILING.
&lt;/PRE&gt;
&lt;PRE&gt;org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
12/15/2017 14:42:50     Job execution switched to status FAILED.
[error] (run-main-0) org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
        at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:933)
        at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:876)
        at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:876)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[trace] Stack trace suppressed: run last *:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
        at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 75 s, completed Dec 15, 2017 2:42:51 PM&lt;/PRE&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;</description>
      <pubDate>Mon, 18 Dec 2017 16:44:46 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Flink-Kafka-program-in-scala-giving-timeout-error-org-apache/m-p/225901#M72705</guid>
      <dc:creator>amit_dass</dc:creator>
      <dc:date>2017-12-18T16:44:46Z</dc:date>
    </item>
    <item>
      <title>Re: Flink Kafka program in scala giving timeout error org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Flink-Kafka-program-in-scala-giving-timeout-error-org-apache/m-p/225902#M72706</link>
      <description>&lt;P&gt;@&lt;A href="https://community.hortonworks.com/users/51199/fhueske.html"&gt;Fabian Hueske&lt;/A&gt;  Please guide here &lt;/P&gt;</description>
      <pubDate>Mon, 18 Dec 2017 16:46:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Flink-Kafka-program-in-scala-giving-timeout-error-org-apache/m-p/225902#M72706</guid>
      <dc:creator>amit_dass</dc:creator>
      <dc:date>2017-12-18T16:46:48Z</dc:date>
    </item>
    <item>
      <title>Re: Flink Kafka program in scala giving timeout error org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Flink-Kafka-program-in-scala-giving-timeout-error-org-apache/m-p/225903#M72707</link>
      <description>&lt;P&gt;Issue resolved.&lt;/P&gt;&lt;P&gt;Work through this checklist:&lt;/P&gt;&lt;P&gt;1. Check that ZooKeeper is running.&lt;/P&gt;&lt;P&gt;2. Check that a Kafka producer and consumer work from the console: create a topic and list it to confirm that Kafka itself is healthy.&lt;/P&gt;&lt;P&gt;3. Use the Flink connector version in sbt that matches your Kafka version.&lt;/P&gt;&lt;P&gt;For Kafka 0.9, the dependency should be:&lt;/P&gt;&lt;PRE&gt;"org.apache.flink" %% "flink-connector-kafka-0.9" % flinkVersion % "provided"&lt;/PRE&gt;&lt;P&gt;and the import in the Scala program: import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaProducer09, FlinkKafkaConsumer09}&lt;/P&gt;&lt;P&gt;For Kafka 0.10:&lt;/P&gt;&lt;PRE&gt;"org.apache.flink" %% "flink-connector-kafka-0.10" % flinkVersion % "provided"

And the import in the Scala program: import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaProducer010, FlinkKafkaConsumer010}&lt;/PRE&gt;</description>
      <pubDate>Mon, 18 Dec 2017 19:47:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Flink-Kafka-program-in-scala-giving-timeout-error-org-apache/m-p/225903#M72707</guid>
      <dc:creator>amit_dass</dc:creator>
      <dc:date>2017-12-18T19:47:09Z</dc:date>
    </item>
  </channel>
</rss>

