Created on 08-03-2020 05:41 PM - edited 05-06-2021 08:50 AM
Cloudera Data Platform Data Center (CDP-DC) doesn't have a Quickstart/Sandbox VM like the ones for the CDH/HDP releases, which helped a lot of people (including me) learn more about the open-source components and see the community's improvements in CDP Runtime.
The objective of this tutorial is to create a VM from scratch via some automation (a shell script and a Cloudera template), so that anyone who wants to use and/or learn Cloudera CDP can run a Sandbox/Quickstart-like environment on their own machine.
Also, in the same file, find the line that begins with "# Example for VirtualBox:" and add/uncomment the lines below:
config.vm.provider "virtualbox" do |vb|
  # Display the VirtualBox GUI when booting the machine
  vb.gui = true
  # Customize the amount of memory and CPUs on the VM:
  vb.memory = "10024"
  vb.cpus = "8"
end
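For reference, the provider block sits inside the top-level `Vagrant.configure` block. A minimal complete Vagrantfile might look like the sketch below (the box name centos/7 matches the CentOS 7 VM this tutorial uses; adjust the networking and provisioning lines to match the file from the repository):

```ruby
# Minimal Vagrantfile sketch showing where the provider block belongs.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  # Prompts for the bridge interface on the first "vagrant up":
  config.vm.network "public_network"

  config.vm.provider "virtualbox" do |vb|
    vb.gui = true          # show the VirtualBox GUI when booting
    vb.memory = "10024"    # RAM in MB reserved for the VM
    vb.cpus = "8"          # number of virtual CPUs
  end
end
```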
Save the file, and now we can bring up the VM:
$ vagrant up
Now it'll ask which network interface to bridge to (only the first time); normally it's the one you use to connect to the internet (in my case, en0):
After this, the VM will be provisioned and the automated CDP setup will start. This can take up to one hour depending on your connection, since it configures the VM and installs all the components for Cloudera Manager and the services, using the automated process located at https://github.com/carrossoni/CDPDCTrial/
The template and the cluster created at the end will contain the following services:
After the install, you can add more services such as NiFi, Kafka, etc., depending on the resources you've reserved for the VM.
After the execution you should see the output below (this step takes about 30 minutes to one hour depending on your connection, since it downloads all the packages and parcels necessary for provisioning CDP Runtime):
After this, the VM will reboot for a fresh start. Wait around 5 minutes for the services to spin up, then go to the next step.
If the install process failed, it's likely a problem during the VM configuration. If CM was installed, you can try going to http://localhost:7180 directly and finishing the install process manually via the Cloudera Manager UI.
If you need to debug further, you can SSH into the VM: go to the directory where the Vagrantfile is located and type:
$ vagrant ssh
Now you can sudo on the box and start inspecting the machine. Check whether the hostname and IP in /etc/hosts are configured properly (the most common issue, since it depends on your machine's network).
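Inside the VM, a couple of quick checks cover that common misconfiguration (the hostname cloudera is the one this tutorial's automation uses; adjust if yours differs):

```shell
# The hostname and /etc/hosts must agree, or the Cloudera Manager
# agent can't heartbeat properly.
hostname            # the hostname the VM reports (expected: cloudera)
cat /etc/hosts      # should map the VM's IP to that same hostname
```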
Step 2: Cloudera Data Platform Access
After the automated process, our CDP Runtime is ready (we've actually provisioned it in only one step)! In your machine's browser, you can connect to CM with the following URL:
The user/password will be admin/admin. After the first login, you can choose the 60-day trial option and click "Continue":
The Welcome page appears. Click the Cloudera logo on the top left, since we've already added a new cluster with the automated process:
At this point all the services are starting. Some errors may happen since we're working in a resource-constrained environment; follow the logs, and it'll be easy to see in Cloudera Manager what's happening. You can also suppress warning messages if they're not critical.
Our environment is now ready for working and learning more about CDP!
HUE and Data Access
You can log in to Hue at the URL http://cloudera:8889/hue. For the first time, use admin/admin; this will be the admin user for Hue:
In Hue, go to the left panel and choose "Importer" → Type = File. Choose the /user/admin directory, click "Upload a file", choose your file (statewide_testing.csv), and then "Open". Now click the file that you've uploaded, and this will take you to the next step:
Click Next and Hue will infer the table name, field types, etc. You can change them or leave them as is, then click "Submit":
At the end you should see the job succeed. Close the job status window and click the Query button:
Now that we have our data, we can query it with Impala SQL!
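For example, you can run queries like these in the Hue editor (a minimal sketch; the table name statewide_testing comes from the uploaded file, assuming Hue created it in the default database):

```sql
-- Sanity-check the imported table (name inferred by Hue from the CSV)
SELECT COUNT(*) FROM default.statewide_testing;

-- Preview a few rows
SELECT * FROM default.statewide_testing LIMIT 10;
```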
(Optional) Ranger Security Masking with Impala Example
To start using/querying the environment with the system user/password that we've created (cloudera/cloudera), we first need to allow access to this user in Ranger. Click the Ranger service and then Ranger Admin Web UI:
Now we have the initial Ranger screen. Log in with the user/password admin/cloudera123:
In the HADOOP SQL section, click the Hadoop SQL link. We will create a new policy that allows access to the new table while showing the tested column masked with NULL results. For that, click the Masking tab and then "Add New Policy" with the following values:
Click the Add button, then go back to the Access tab and click the "Add New Policy" button with the following parameters:
Click the Add button, and now our user should be able to select the data in this table, seeing only the masked values. First we'll create the user in Hue: in the left panel, click the button with your user initial and then "Manage Users":
Click "Add User", enter cloudera as the username with the password cloudera; you can skip steps 2 and 3 by clicking "Add user" directly.
Log out from Hue and log in with our newly created user, go to the query editor, and select the data again:
You should see the masking policy in action!
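For example, re-running a query on the protected column as the cloudera user (the column name tested is the one used in the masking policy above; table name as inferred by Hue during the import):

```sql
-- As the "cloudera" user: the Ranger masking policy rewrites this column
SELECT tested FROM default.statewide_testing LIMIT 5;
-- The tested column comes back as NULL for this user, while an
-- unrestricted user (e.g. admin) still sees the real values.
```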
In this blog we've learned:
How to set up a Vagrant CentOS 7 machine with VirtualBox and CDP packages
Configure CDP-DC for the first run
Configure data access
Setup simple security policies with the masking feature
You can play with the services, install other parcels like Kafka/NiFi/Kudu to create a streaming ingestion pipeline, and query in real time with Spark/Impala. Of course, for that you'll need more resources, and this can be changed at the beginning, during the VM configuration.