
CCP: Data Engineering Exam




I'm studying for the exam and trying to find out which main tools and subjects I should focus on.


Should we focus on Sqoop (import and export), Flume, Kafka, Hive, Spark, and Oozie?


Am I missing anything? Or am I studying more than I need to, given that my background so far is only Sqoop and Hive?


Best regards,





Community Manager

I think the wording below, from the CCP Data Engineer page, should answer that question.


What should you expect?

You are given five to eight customer problems each with a unique, large data set, a CDH cluster, and four hours. For each problem, you must implement a technical solution with a high degree of precision that meets all the requirements. You may use any tool or combination of tools on the cluster (see list below) -- you get to pick the tool(s) that are right for the job. You must possess enough industry knowledge to analyze the problem and arrive at an optimal approach given the time allowed. You need to know what you should do and then do it on a live cluster under rigorous conditions, including a time limit and while being watched by a proctor.

Cy Jervis, Manager, Community Program



Super Collaborator


What am I responsible for during the exam? 
These are practical exams. During the exam you will be asked to evaluate a scenario and implement a solution. You are responsible for everything necessary to generate that solution, such as writing code, configuring tools, and debugging any issues. You may use any approach or tools on the cluster that will produce your solution. Only the results will be graded.