02-10-2016 08:55 AM
We created a Spark Streaming application that is integrated with Kafka. It works fine within Eclipse.
Now we want to deploy it to our Cloudera cluster managed by YARN.
How can we pass parameters such as the Kafka broker list and the Kafka topic names to the application running in the cluster?
In Eclipse we just use command-line arguments: "StreamingApp <brokers> <topics>".
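For reference, our entry point looks roughly like this (a simplified sketch; the class name matches the usage string above, and the actual Spark/Kafka calls are left as comments because they need the spark-streaming-kafka dependency):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class StreamingApp {
    // Split the second positional argument ("topicA,topicB") into a set of topic names.
    static Set<String> parseTopics(String topicsArg) {
        return new HashSet<>(Arrays.asList(topicsArg.split(",")));
    }

    public static void main(String[] args) {
        if (args.length < 2) {
            System.err.println("Usage: StreamingApp <brokers> <topics>");
            System.exit(1);
        }
        String brokers = args[0];                   // e.g. "host1:9092,host2:9092"
        Set<String> topics = parseTopics(args[1]);  // e.g. "topicA,topicB"
        // Map<String, String> kafkaParams = new HashMap<>();
        // kafkaParams.put("metadata.broker.list", brokers);
        // ... create the StreamingContext and the Kafka direct stream here ...
        System.out.println("brokers=" + brokers + " topics=" + topics);
    }
}
```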
Here is an example I saw on the Cloudera website:
./bin/spark-submit --name "My app" --master local --conf spark.eventLog.enabled=false --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" myApp.jar
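From reading the spark-submit docs, my understanding is that anything listed after the application jar is forwarded to the application's main() as arguments, so I would guess something like the following (the broker hosts and topic names are placeholders):

```shell
# Application arguments go after the jar; here <brokers> and <topics>
# are passed as args[0] and args[1] to the app's main().
./bin/spark-submit \
  --name "StreamingApp" \
  --master yarn \
  --deploy-mode cluster \
  myApp.jar \
  broker1.example.com:9092,broker2.example.com:9092 \
  topicA,topicB
```

Is that the right way to do it on YARN?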
Can you please explain how I should pass the broker and topic info to our application?