Exploring Apache Flink with HDP

Apache Flink is an open source platform for distributed stream and batch data processing. More details on Flink and how it is being used in the industry today are available here: http://flink-forward.org/?post_type=session. There are a few ways you can explore Flink on HDP 2.3:

1. Compilation on HDP 2.3.2

To compile Flink from source on HDP 2.3.2 you can use these commands:

curl -o /etc/yum.repos.d/epel-apache-maven.repo https://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo
yum -y install apache-maven-3.2*
git clone https://github.com/apache/flink.git
cd flink
mvn clean install -DskipTests -Dhadoop.version=2.7.1.2.3.2.0-2950 -Pvendor-repos

Note that with this option I ran into a classpath bug and raised it here: https://issues.apache.org/jira/browse/FLINK-3032
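
If the build succeeds, the assembled binary distribution should end up under flink-dist/target; the exact directory names below are an assumption based on the usual Flink build layout and will vary with the version you checked out. A minimal sketch of launching it on YARN from there:

#change into the assembled distribution (adjust the path to what the build actually produced)
cd flink-dist/target/flink-*-bin/flink-*/
export HADOOP_CONF_DIR=/etc/hadoop/conf
#start a small YARN session, same flags as in the tarball option below
./bin/yarn-session.sh -n 1 -jm 768 -tm 1024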

2. Run using the precompiled tarball

wget http://www.gtlib.gatech.edu/pub/apache/flink/flink-0.9.1/flink-0.9.1-bin-hadoop27.tgz
tar xvzf flink-0.9.1-bin-hadoop27.tgz
cd flink-0.9.1
export HADOOP_CONF_DIR=/etc/hadoop/conf
./bin/yarn-session.sh -n 1 -jm 768 -tm 1024
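
Once the session is up, you can confirm it from the YARN side and later shut it down with the standard YARN CLI; a minimal sketch (the application id below is illustrative, take the real one from the listing):

#list running YARN applications; the Flink session should appear here
yarn application -list
#stop the session by killing its YARN application
yarn application -kill application_1446561234567_0001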

3. Using the Ambari service (demo purposes only for now)

The Ambari service lets you easily install/compile Flink on HDP 2.3.

  • Features:
    • By default, downloads the prebuilt Flink 0.9.1 package, but also gives the option to build the latest Flink from source instead
    • Exposes flink-conf.yaml in Ambari UI

Setup

  • Download HDP 2.3 sandbox VM image (Sandbox_HDP_2.3_1_VMware.ova) from Hortonworks website
  • Import Sandbox_HDP_2.3_1_VMware.ova into VMware and set the VM memory size to 8GB
  • Now start the VM
  • After it boots up, find the IP address of the VM and add an entry to your machine's hosts file. For example:
192.168.191.241 sandbox.hortonworks.com sandbox    
  • Note that you will need to replace the above with the IP for your own VM
  • Connect to the VM via SSH (password hadoop)
ssh root@sandbox.hortonworks.com
  • To download the Flink service folder, run the commands below
VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
sudo git clone https://github.com/abajwa-hw/ambari-flink-service.git   /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK   
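  • Before restarting Ambari, it is worth a quick sanity check that the detected stack version and the downloaded service folder look right (if Flink later does not show up under 'Add Service', a wrong $VERSION is a common cause). For example:
echo $VERSION
ls /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK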
  • Restart Ambari
#sandbox
service ambari restart

#non sandbox
sudo service ambari-server restart
  • Then you can click on 'Add Service' from the 'Actions' dropdown menu in the bottom left of the Ambari dashboard:

On bottom left -> Actions -> Add service -> check Flink server -> Next -> Next -> Change any config you like (e.g. install dir, memory sizes, num containers or values in flink-conf.yaml) -> Next -> Deploy

  • By default:
    • Container memory is 1024 MB
    • Job manager memory is 768 MB
    • Number of YARN containers is 1
  • On successful deployment you will see the Flink service as part of the Ambari stack and will be able to start/stop the service from there
  • You can see the parameters you configured under the 'Configs' tab
  • One benefit of wrapping the component in an Ambari service is that you can now monitor/manage it remotely via the REST API (a sketch for tracking the asynchronous start/stop requests follows this list)
export SERVICE=FLINK
export PASSWORD=admin
export AMBARI_HOST=localhost

#detect name of cluster
output=`curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari'  http://$AMBARI_HOST:8080/api/v1/clusters`
CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`
#get service status
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X GET http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE

#start service
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE

#stop service
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
  • ...and you can also install it via Blueprint. See the example here on how to deploy custom services via Blueprints
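  • Note that the start/stop calls above are asynchronous: Ambari answers with a request resource whose progress you can poll. A minimal sketch, reusing the same variables as above (the request id 42 is illustrative; take the real one from the href in the PUT response):
#check progress of an asynchronous start/stop request
curl -u admin:$PASSWORD -H 'X-Requested-By: ambari' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/requests/42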

Use Flink

  • Run word count job
su flink
export HADOOP_CONF_DIR=/etc/hadoop/conf
cd /opt/flink
./bin/flink run ./examples/flink-java-examples-0.9.1-WordCount.jar
  • This should generate a series of word counts
  • Open the YARN ResourceManager UI. Notice that Flink is running on YARN
  • Click the ApplicationMaster link to access the Flink web UI
  • Use the History tab to review details of the job that ran
  • View metrics in the Task Manager tab

Other things to try

More details on Flink and how it is being used in the industry today are available here: http://flink-forward.org/?post_type=session
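
For instance, the word count example from the previous section can also read its input from and write its result to HDFS by passing explicit paths (the bundled WordCount example typically accepts <input path> <output path> arguments). A minimal sketch, assuming the YARN session from earlier is still running; the file and directory names are illustrative:

su flink
export HADOOP_CONF_DIR=/etc/hadoop/conf
cd /opt/flink
#stage a small text file in HDFS to use as input
hadoop fs -mkdir -p /tmp/flink-input
hadoop fs -put /etc/hosts /tmp/flink-input/
#run the same example with explicit input and output paths instead of the built-in sample data
./bin/flink run ./examples/flink-java-examples-0.9.1-WordCount.jar hdfs:///tmp/flink-input/hosts hdfs:///tmp/flink-wordcount-out
#inspect the result
hadoop fs -ls /tmp/flink-wordcount-out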

Remove service
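
A service installed this way can typically be removed by stopping it, deleting the service resource through the Ambari REST API, removing the service definition folder, and restarting Ambari. A rough sketch (reusing the variables from the REST examples above; not necessarily the author's exact procedure):

#stop the service (same call as the stop example above)
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
#delete the service from the cluster
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X DELETE http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
#remove the service definition and restart Ambari so it no longer appears in the stack
sudo rm -rf /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK
sudo service ambari-server restart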

Comments
New Contributor

Hello,

After downloading the ambari-flink-service I placed it in /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK and restarted Ambari. But when I go to Actions > Add Service, Flink doesn't appear in the list. What could be the problem?

Thank you and regards,

Pedro Chaves

New Contributor

Had an issue with https://community.hortonworks.com/questions/54894/problem-when-i-install-flink-in-hortonworks.html on HDP 2.4. I have a fix and can open a pull request if you want. - Hananiel

New Contributor

Hi! When could we expect a stable, non-demo Ambari service for Flink that could be installed not only on your sandbox but on real Hadoop infrastructure? I haven't found it in your roadmap.

Thanks in advance!

Andrey

Contributor

Thanks, it is OK.
