
Big Data Continuous Delivery


Hi,

We have started investigating how to implement a Big Data continuous delivery process, and we'd like to know if anyone has implemented one.

What we need to know is whether there are any best practices for:

  • Dev environment
  • Building process
  • Deploy on unit test env
  • Deploy on integration test env
  • Deploy on production

Basically we develop with Hive, Python (Spark), shell scripts, Flume, and Sqoop. Once all of the above is defined, we would like to provision these environments in containers and set up continuous integration/deployment via:

Mesos + Jenkins + Marathon + Docker containers to spin up containers with Hortonworks HDP 2.2.0 (same as the production environment).
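For context, the kind of Marathon deployment we have in mind would look roughly like the sketch below; the Marathon endpoint and the image name are placeholders, not a working setup:

# Sketch: register a Docker app with Marathon via its REST API.
# "marathon:8080" and "my-registry/hdp:2.2.0" are placeholders.
cat > hdp-app.json <<'EOF'
{
  "id": "/ci/hdp-test-env",
  "cpus": 2,
  "mem": 8192,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "my-registry/hdp:2.2.0", "network": "HOST" }
  }
}
EOF
curl -X POST -H "Content-Type: application/json" \
     --data @hdp-app.json http://marathon:8080/v2/apps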

Many thanks,

Fabricio


12 REPLIES

mlanciaux (Rising Star)

Dear Fabricio,

Yes, we have several customers working on this topic. It is an interesting one. From what I saw last time, the architecture was based on two real clusters: one PROD, and one DR + TEST + INTEGRATION, with YARN queues and HDFS quotas configured accordingly. Jenkins + SVN took care of versioning, builds, and tests.
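For example (the paths and sizes below are illustrative, not the customer's actual values), the environment split on a shared cluster can be enforced with per-environment HDFS directories and quotas:

# Per-environment HDFS directories with space quotas
hdfs dfs -mkdir -p /envs/dev /envs/test /envs/integration
hdfs dfsadmin -setSpaceQuota 5t  /envs/dev
hdfs dfsadmin -setSpaceQuota 10t /envs/test
hdfs dfsadmin -setSpaceQuota 20t /envs/integration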

Some teams have also built their own projects to validate development work and track deployments across the different environments.

I don't know much about Docker, Mesos, or Marathon, so I can't answer for that part.

Can you perhaps give me more details about what you are looking for? What did you try?

Kind regards.


Hi mlanciaux,

Thanks for your reply.

Let's put aside Docker, Mesos, and Marathon; that was just one path I had found to follow.

We do not have two clusters, just a dev one that is a small portion of the production environment. So let's suppose DEV + TEST + INTEGRATION all run on this small one.

I wonder if you could share a paper or guide I could start with. I've found lots of information and different approaches. Is there anything Hortonworks could recommend along the same lines as Jenkins + SVN or Git?

Thanks

Fabricio

mlanciaux (Rising Star) · ACCEPTED SOLUTION

Sure. Basically, regarding the cluster, you may find it useful to:

  • Configure queues with the Capacity Scheduler (production, dev, integration, test), and use elasticity and preemption
  • Map users to queues
  • Use a naming convention for queues and users, e.g. a -dev or -test suffix (see the sketch after this list)
  • Depending on the tool you are using, you can use
    • Different database names with Hive
    • Different directories with HDFS, plus quotas
    • Namespaces with HBase
  • Ranger will help you configure permissions so that each user / group can access the right resources
  • Each user will have different environment settings
  • Use Jenkins and Maven (if needed) to build, push the code (with the SSH plugin), and run the tests
  • Use templates to provide tools to users with logging features and the correct parameters and options
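
To make the naming convention concrete, here is a sketch of how jobs could target their environment; the queue and namespace follow the -dev convention above, and the database sales_${ENV}, the orders table, and my_job.py are hypothetical names:

ENV=dev   # or test, integration, production

# Submit a Spark job to the matching YARN queue
spark-submit --master yarn-cluster --queue "${ENV}" my_job.py

# Point Hive at the environment-specific database and Tez queue
hive --hiveconf tez.queue.name="${ENV}" \
     -e "USE sales_${ENV}; SELECT COUNT(*) FROM orders;"

# Create a per-environment HBase namespace (one-time setup)
echo "create_namespace '${ENV}'" | hbase shell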

Fabricio

OK, thanks!

Regarding the cluster, we are almost OK.

My concern is about the last two options.

Would you have any specific documentation/configuration on installing Jenkins properly to work with a Hortonworks cluster?

mlanciaux (Rising Star)

I think the key point is to configure Jenkins to reach the edge node via one of the SSH plugins (or to install Jenkins there). The rest is a matter of configuring security and backups, and choosing the right set of parameters to fit your usage, so that you can switch easily from one environment to another (dev, test, prod).
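
As a rough sketch (the host naming convention, paths, and run_tests.sh are hypothetical), a Jenkins "execute shell" build step deploying through the edge node could look like this:

#!/bin/sh
# Push the build artifact to the edge node of the target environment
# and run the test suite there. EDGE_HOST and all paths are placeholders.
ENV="${1:-dev}"                      # dev | test | prod
EDGE_HOST="edge-${ENV}.example.com"  # hypothetical naming convention

scp target/app.tar.gz "jenkins@${EDGE_HOST}:/opt/deploy/"
ssh "jenkins@${EDGE_HOST}" "
  cd /opt/deploy &&
  tar xzf app.tar.gz &&
  ./run_tests.sh --env ${ENV}
"

Parameterizing the environment name this way keeps a single job definition reusable across dev, test, and prod.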


Fabricio

Thanks @mlanciaux

mlanciaux (Rising Star)

And don't forget to check these best practices: https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+Best+Practices

mlanciaux (Rising Star)

Dear Fabricio, I successfully made a workflow run from my local VM against my remote Hadoop cluster by changing the SSH connection property. Hope that helps. Kind regards.