We are working on an application that uses Spark, HDFS, and Kafka.
We want to deploy this application on an existing HDP cluster, so what would be the best approach to deploy it on HDP in the least amount of time? What I want to do is create a script that coordinates with Ambari and finds out which components are already installed on the existing HDP cluster. For example, if an HDP cluster does not contain Spark, the script will automatically download it (from the Hortonworks repo) and configure Spark on that cluster; otherwise it will simply load all the tasks/jobs.
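Something like the sketch below is what I have in mind, using the Ambari REST API to list the installed services (the Ambari host, credentials, and cluster name are placeholders):

```python
import requests

# Placeholder values -- replace with your Ambari host, credentials and cluster name.
AMBARI_URL = "http://ambari-host:8080/api/v1"
AUTH = ("admin", "admin")
CLUSTER = "mycluster"

def installed_services():
    """Return {service_name: state} for every service Ambari knows about."""
    resp = requests.get(
        f"{AMBARI_URL}/clusters/{CLUSTER}/services",
        params={"fields": "ServiceInfo/state"},
        auth=AUTH,
    )
    resp.raise_for_status()
    return {
        item["ServiceInfo"]["service_name"]: item["ServiceInfo"]["state"]
        for item in resp.json()["items"]
    }

services = installed_services()
if "SPARK" not in services:
    print("Spark is not installed -- would trigger download/configure here")
else:
    print("Spark state:", services["SPARK"])
```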
Can I use ZooKeeper to detect which services are installed and what state they are in (running/stopped/maintenance)?
Unfortunately that information isn't in ZooKeeper, so you can't get it from there unless you write it to ZooKeeper yourself (at which point the solution gets overly complicated). If you really must avoid the REST API, you could query the Ambari database directly to list the installed services, for example like so:
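A rough sketch, assuming the default PostgreSQL backend with the default ambari/bigdata credentials, and the clusterservices/hostcomponentstate tables (table names can differ between Ambari versions):

```python
import psycopg2

# Assumed connection details -- adjust host, database name and credentials
# to match your Ambari server's database configuration.
conn = psycopg2.connect(
    host="ambari-host", dbname="ambari", user="ambari", password="bigdata"
)

with conn, conn.cursor() as cur:
    # Services registered with the cluster (HDFS, YARN, SPARK, KAFKA, ...).
    cur.execute("SELECT service_name FROM clusterservices;")
    services = [row[0] for row in cur.fetchall()]
    print("Installed services:", services)

    # Current state of each component on each host (STARTED, INSTALLED, ...).
    cur.execute("SELECT component_name, current_state FROM hostcomponentstate;")
    for component, state in cur.fetchall():
        print(component, state)
```

That said, the REST API is the supported way to read this information; the database schema is internal to Ambari and may change between versions, so only go this route if REST really is not an option.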