Bigdata Continuous Delivery
Created on 06-06-2016 07:20 PM - edited 09-16-2022 03:23 AM
Hi,
We have started studying how to implement a Big Data continuous delivery process, and we'd like to know if someone has already implemented one.
What we need to know is whether there are any best practices for:
- Dev environment
- Build process
- Deployment to a unit test environment
- Deployment to an integration test environment
- Deployment to production
Basically we develop with Hive, Python (Spark), shell scripts, Flume, and Sqoop. Once all of the above is defined, we would like to provision these environments in containers and set up continuous integration / deployment via Mesos + Jenkins + Marathon + Docker containers, spinning up Docker instances with Hortonworks HDP 2.2.0 (the same as the production environment).
Many thanks,
Fabricio
Created 06-09-2016 11:01 AM
Dear Fabricio,
Yes, we have several customers working on this topic. It is an interesting one. From what I have seen most recently, the architecture was based on two real clusters, one for PROD and one for DR + TEST + INTEGRATION, with YARN queues and HDFS quotas configured accordingly, and Jenkins + SVN taking care of versioning, builds, and tests.
Some teams have also built their own projects to validate development work and track deployments across the different environments.
I don't know much about Docker, Mesos, or Marathon, so I can't answer for that part.
Could you give me more details about what you are looking for? What have you tried?
Kind regards.
Created 06-10-2016 05:48 PM
Hi mlanciaux,
Thanks for your reply.
Let's put aside Docker, Mesos, and Marathon; that was just one path I had found to follow.
We do not have two clusters, but rather something like a dev one, a small-scale copy of the production environment. So let's suppose DEV + TEST + INTEGRATION all run on this small cluster.
Could you share some papers I could start with? I've found a lot of information and many different approaches. Is there anything Hortonworks could recommend along the same lines as Jenkins + SVN or Git?
Thanks
Fabricio
Created 06-12-2016 11:22 AM
Sure. Regarding the cluster, you may find it useful to:
- Configure queues with the Capacity Scheduler (production, dev, integration, test), using elasticity and preemption
- Map users to queues
- Use a naming convention for queues and users, e.g. a -dev or -test suffix
- Depending on the tool you are using, isolate the environments with (see the sketch after this list):
  - Different database names in Hive
  - Different directories in HDFS, plus quotas
  - Namespaces in HBase
- Use Ranger to configure permissions so that each user / group can access only the right resources
- Give each user different environment settings
- Use Jenkins and Maven (if needed) to build, push the code (with the SSH plugin), and run the tests
- Use templates to provide tools to users, with logging features and the correct parameters and options
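A minimal shell sketch of the environment-isolation idea above. All names here (the /data/<env> paths, app_<env> databases, and the 1 TB quota) are hypothetical placeholders, not from this thread; the queue definitions themselves would live in capacity-scheduler.xml and are not shown:

```sh
#!/bin/sh
# Hypothetical isolation commands -- every name here (paths, databases,
# namespaces, the 1t quota) is a placeholder, not taken from this thread.
for ENV in dev test integration production; do
    # One HDFS directory per environment, capped with a space quota.
    hdfs dfs -mkdir -p "/data/${ENV}"
    hdfs dfsadmin -setSpaceQuota 1t "/data/${ENV}"

    # One Hive database per environment.
    hive -e "CREATE DATABASE IF NOT EXISTS app_${ENV}"

    # One HBase namespace per environment.
    echo "create_namespace '${ENV}'" | hbase shell
done
```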
Created 06-13-2016 04:52 PM
OK, thanks!
Regarding the cluster we are almost OK; my concern is about the last two points.
Do you have any specific documentation or configuration guidance for installing Jenkins properly to work with a Hortonworks cluster?
Created 06-13-2016 07:58 PM
I think the key point is to configure Jenkins so it can reach the edge node via one of the SSH plugins (or to install Jenkins there). The rest is a matter of configuring security and backups, and choosing the right set of parameters to fit your usage so you can switch easily from one environment to the other (dev, test, prod); a sketch of such a build step is below.
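A minimal sketch of what such a Jenkins "Execute shell" build step might look like, under assumed conventions: the hostnames, users, and paths are placeholders, and the per-environment edge-node naming is hypothetical, not a setup confirmed in this thread:

```sh
#!/bin/sh
# Hypothetical Jenkins "Execute shell" build step; hostnames, users, and
# paths are placeholders, not a setup confirmed in this thread.
ENV="${DEPLOY_ENV:-dev}"                # dev | test | prod (a build parameter)
EDGE_NODE="edge-${ENV}.example.com"     # assumed per-environment edge node

# Copy the built artifacts to the edge node...
scp -r target/scripts/ "jenkins@${EDGE_NODE}:/opt/app/${ENV}/"

# ...then run the deployment against that environment's Hive database.
ssh "jenkins@${EDGE_NODE}" \
    "hive --database app_${ENV} -f /opt/app/${ENV}/deploy.hql"
```

Switching environments is then just a matter of changing the DEPLOY_ENV build parameter rather than editing the job itself.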
Created 06-16-2016 01:15 PM
Here is a simple example: https://community.hortonworks.com/articles/40171/simple-example-of-jenkins-hdp-integration.html; I will add more later.
Created 06-16-2016 05:06 PM
Thanks @mlanciaux
Created 06-16-2016 03:23 PM
And don't forget to check these best practices: https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+Best+Practices
Created 06-20-2016 03:50 AM
Dear Fabricio, I successfully made a workflow run from my local VM against my remote Hadoop cluster just by changing the SSH connection property; a rough sketch of the idea is below. Hope that helps. Kind regards.
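A minimal sketch of that idea, assuming hypothetical hostnames and users (the only real value is the default HDP sandbox address, sandbox.hortonworks.com on port 2222): the job stays identical and only the SSH connection settings change between targets.

```sh
#!/bin/sh
# Hypothetical: the same job runs against either target; only the SSH
# connection properties change. Hostnames and users are placeholders,
# apart from the default HDP sandbox address (sandbox.hortonworks.com:2222).
TARGET="${1:-local}"    # "local" = sandbox VM, "remote" = real cluster

if [ "$TARGET" = "local" ]; then
    SSH_HOST="sandbox.hortonworks.com"; SSH_PORT=2222; SSH_USER="root"
else
    SSH_HOST="edge.prod.example.com";   SSH_PORT=22;   SSH_USER="jenkins"
fi

# The workflow itself is unchanged between environments.
ssh -p "$SSH_PORT" "${SSH_USER}@${SSH_HOST}" "hdfs dfs -ls /data"
```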
