Hi, I'm testing Flink with Hortonworks. I have 4 servers: one ResourceManager and three NodeManagers. But when I open the Flink ApplicationMaster, only 1 TaskManager with 1 slot is available, so no parallelism is possible.
In Ambari, Flink config file: flink_numcontainers = 3
How do I have to set it up to use the three available TaskManagers?
Thank you for trying out Flink. Maybe there's a flaw in Hortonworks' Flink plugin. (@abajwa, do you have an idea?)
It's actually quite easy to start Flink on YARN from the command line. Check out this documentation page: https://ci.apache.org/projects/flink/flink-docs-master/setup/yarn_setup.html#quickstart Maybe you need to set HADOOP_CONF_DIR to point to the YARN configuration directory.
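For reference, a minimal sketch of starting a Flink YARN session from the command line, following the quickstart linked above (the container count, memory, and slot values here are illustrative; adjust them for your cluster):

```shell
# Point Flink at the YARN/HDFS configuration
# (/etc/hadoop/conf is the usual Hortonworks default)
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Start a YARN session with 3 TaskManager containers,
# 1024 MB of memory and 1 processing slot per TaskManager
./bin/yarn-session.sh -n 3 -tm 1024 -s 1
```

With 3 TaskManagers of 1 slot each, jobs submitted to this session can run with parallelism up to 3.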
Hi Robert. I have a Flink-on-YARN cluster running and working fine, but the point is to test Hortonworks Flink with Ambari managing the cluster, and that's where the problem is. In fact, when I run my job, the log contains the line "YARN properties set default parallelism to 3", but only 1 task slot is available.
$HADOOP_CONF_DIR is pointing to /etc/hadoop/conf/.
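One way to cross-check what YARN actually allocated, independently of Ambari, is to list the running applications and then submit a job with an explicit parallelism (the example jar path is illustrative):

```shell
# List running YARN applications; the Flink session should show up here
# with its allocated containers visible in the ResourceManager UI
yarn application -list

# Submit a job with parallelism 3 explicitly; if only 1 slot exists,
# Flink will report that there are not enough free slots
./bin/flink run -p 3 ./examples/WordCount.jar
```

If the explicit `-p 3` submission fails with a slot shortage, that confirms the session really only started one TaskManager, regardless of what the Ambari config says.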
I'm sorry that you are running into this issue. I would suggest contacting the author of the Ambari plugin to get support from there.
I'm pretty sure Flink works on Hortonworks clusters, but I cannot help you with the specifics of the Ambari plugin (because I didn't write it).