02-18-2021
02:32 AM
Running 'pyspark' applications in CML for model generation and prediction, with data residing in COD
With the recent addition of the Cloudera Operational Database (COD) experience to CDP Public Cloud, we want to explore how it can be leveraged in a real-life end-user 'DataFlow' scenario. This article shows how to execute a Spark/pyspark job in CML to run a modeling task using data residing in COD. We read a table from COD and, once the prediction is done, write the score table back to COD.
Getting Started
CDP Runtime (supporting COD) >=7.2.2
We assume that a CDP environment, a Data Lake, and a Data Engineering Data Hub cluster have already been provisioned. We further assume that the COD and CML experiences have been provisioned for the target CDP environment.
Note: If you are just starting with CDP, please refer to The world’s first enterprise data cloud to learn how to put all of these requirements in place with ease.
Some of the following steps are already documented in this blog (thanks @shlomi Tubul). On top of that, we elaborate and expand on what needs to be done for the CML-COD use case.
Main components used in this demo:
Cloudera Operational Database (COD), as mentioned in my previous post, is a managed dbPaaS solution available as an experience in Cloudera Data Platform (CDP)
CML is designed for data scientists and ML engineers, enabling them to create and manage ML projects from code to production. Main features of CML:
Development Environment for Data Scientists, Isolated, Containerized, and Elastic
Production ML Toolkit – Deploying, Serving, Monitoring, and Governance of ML models
App Serving – Build and Serve Custom applications for ML use-cases
Setting Up the Environment
The first thing we need to do is to create a database in COD:
Log in to Cloudera Data Platform (CDP) Public Cloud 'Control Plane' (CP)
Select Operational Database and then click Create Database
Select the environment to which the COD will be attached and give a unique name for the COD, and then click Create Database
Once created, open the COD page and use the HBase Client Configuration URL to get the hbase-site.xml needed in CML
Next, Provision CML:
Log in to CDP Public Cloud CP
Select Machine Learning and click Provision Workspace
Select the environment for which the CML workspace will be provisioned and give a unique name for the same, and then click Provision Workspace
Create Project in CML: Model and Prediction
Once CML is provisioned, we go ahead and create a project in the workspace. We will use the Local template and upload the required files to it. create_model_and_score_phoenixTable.py is the pyspark script we will use for the task.
CML: Configuration for use in CML session
Upload the configuration files we downloaded from COD (the last step of "Setting Up the Environment"); we need the hbase-site.xml file in the CML session in order to connect to COD.
We also need to configure the spark-defaults.conf file with the jars to be used. If any external cloud storage is in use (from where the data is being read), we need to configure that as well so that Spark can authenticate with IDBroker and get access. Note: Since our data is in an external S3 bucket, we added the appropriate IDBroker mapping to allow the user access to this external bucket.
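For reference, equivalent settings can also be applied per session from the pyspark code itself. The sketch below is only illustrative and uses placeholder values: the connector jar path and bucket name are assumptions that must match your environment, and on newer Spark versions the property is spark.kerberos.access.hadoopFileSystems instead of spark.yarn.access.hadoopFileSystems.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("cml-cod-demo")
    # Hypothetical jar path: the HBase/Phoenix Spark connector jar provided for the COD client
    .config("spark.jars", "/home/cdsw/jars/hbase-spark-connector.jar")
    # Ask Spark on YARN to obtain credentials (via IDBroker on CDP) for the external bucket
    .config("spark.yarn.access.hadoopFileSystems", "s3a://<external-bucket>")
    .getOrCreate()
)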
Running the Task
The pyspark script we used can be found here.
Though the code in this file was written for the CDSW integration (on-premises set-ups), we modified it slightly to work on the cloud-native platform, i.e. CDP Public Cloud.
First, we added two lines at the start of the script. As of now, these lines are required to copy the hbase-site.xml config into Spark's default conf directory (so that the connection to COD works) and to make the file readable by all users. (There is currently no way to override this, so the workaround is needed.)
Second, we modified the target_path for the temp files that will be generated by the Spark job, since the user executing this job (a user that has been given the "MLUser" permission on the environment) needs access to the location specified.

!cp /home/cdsw/hbase-site.xml /etc/spark/conf/
!chmod 644 /etc/spark/conf/hbase-site.xml

# ... same code section as in the git file ...

target_path = "<path to the location (in our case, an external S3 bucket) where the data resides>"

# ... same code section as in the git file ...
Everything else in the file remains the same.
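For context, the core read/write pattern against the COD (Phoenix) table looks roughly like the minimal sketch below, using the Apache Phoenix Spark connector. The table names, ZooKeeper URL, and the dummy prediction column are placeholders, not the actual code from the script; depending on the connector version shipped with your runtime, the format name and options may differ.

from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

spark = SparkSession.builder.appName("cod-model-and-score").getOrCreate()

# Read the input table from COD via the Phoenix Spark connector
input_df = (
    spark.read.format("org.apache.phoenix.spark")
    .option("table", "INPUT_TABLE")                    # placeholder table name
    .option("zkUrl", "<cod-zookeeper-quorum>:2181")    # quorum comes from hbase-site.xml
    .load()
)

# Stand-in for the real modeling step: append a dummy prediction column
scored_df = input_df.withColumn("PREDICTION", lit(0.0))

# Write the score table back to COD
(
    scored_df.write.format("org.apache.phoenix.spark")
    .mode("overwrite")
    .option("table", "BATCHTABLE2")                    # output table created on success
    .option("zkUrl", "<cod-zookeeper-quorum>:2181")
    .save()
)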
Start running the project
Click New Session
Give the session a name and click the Start Session button at the bottom (adjust Workbench, kernel, and Resource Profile if required for the project)
Once the session has started, select the pyspark script file and click the Run icon in the menu above the file contents. Once execution starts, the session logs and task logs tabs appear on the right half of the screen. The logs end when the script execution completes (Success or Failure). There we have it: on Success, the table (BatchTable2) is created in COD. The session can be closed manually by clicking the Stop button in the top right corner (or it will be killed by the auto timeout if not in use for a certain amount of time).
10-28-2018
06:30 PM
@Alexander Saip By clean-up, do you mean you just deleted the contents of the ZooKeeper logs, or did you clean up the "hiveserver2" znode? From the log snippet you posted above, it looks like the "hiveserver2" znode might not have been created. Can you log in to the ZooKeeper CLI and check: /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <zookeeper server host name>:2181, and then do a "ls /" in the ZooKeeper CLI. It should list a "hiveserver2" znode there. If it is missing, try to create it [launch the ZooKeeper CLI as the hive user (do "sudo su - hive")], and then restart HiveServer2. If this is a secured cluster, you should also check for Kerberos-related errors in the log (it could be an auth token related issue).
10-28-2018
06:19 PM
@Mike Lok If you are running HDP 2.x, try the following URL: jdbc:hive2://<zookeeper server host name>:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2 For HDP 3.0, try the one below: jdbc:hive2://<zookeeper server host name>:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive Also, make sure HiveServer2 Interactive (HSI) is actually running (do a ps -ef | grep llap) on the host where HSI is installed.
10-25-2018
11:03 AM
@Cody kamat Can you please elaborate a bit more: what is the memory usage after enabling LLAP (used/total memory)? Also, which HDP version are you using, and what is the cluster size? There are multiple parameters one should configure, such as the number of nodes used by LLAP, the number of LLAP daemons, llap_heap_size, memory cache per daemon, the number of threads, and a few more like the maximum memory for a YARN container and the Tez container size. The values of all of these depend on the cluster configuration: memory per node, CPU cores per node, and the number of nodes. So please check all these parameters, and let me know the cluster details if you want some recommendations from my side. Also, you can refer to this link, which will probably clarify things for you: https://community.hortonworks.com/articles/149486/llap-sizing-and-setup.html
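As a rough, purely illustrative back-of-the-envelope calculation of how these parameters interact (the node profile and ratios below are assumptions for the sake of the example, not recommendations; the sizing article linked above is the authority):

# Hypothetical node profile: 96 GB of memory given to YARN, 16 cores; adjust to your cluster.
yarn_nm_memory_mb   = 96 * 1024       # yarn.nodemanager.resource.memory-mb
tez_coordinator_mb  = 4 * 1024        # memory set aside for Tez AM / query coordinator containers
llap_daemon_size_mb = yarn_nm_memory_mb - tez_coordinator_mb   # one LLAP daemon per node

num_executors_per_daemon = 12         # roughly tied to the cores handed to LLAP
executor_memory_mb       = 4 * 1024
llap_heap_size_mb        = num_executors_per_daemon * executor_memory_mb

headroom_mb        = int(0.06 * llap_daemon_size_mb)           # off-heap / JVM headroom
llap_cache_size_mb = llap_daemon_size_mb - llap_heap_size_mb - headroom_mb

print(llap_daemon_size_mb, llap_heap_size_mb, llap_cache_size_mb)   # 94208 49152 39404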
08-01-2018
04:20 PM
@Ashnee Sharma Based on the little info you shared, below is my guess: the cache only comes into the picture when similar data is used by the queries being run. If there is no overlap of data between the executing queries, the cache has no impact at all (the cache is empty when the first query runs), and the cached data will keep changing for every new (data-exclusive) query, based on the cache size. If the above is not the case, please share the Hive Interactive Server logs to debug it further.
04-09-2018
06:29 AM
@Saurabh, There seem to be 2 different issues at hand. 1) Make sure the user ID of each user is the same across all nodes in the cluster (otherwise this will cause conflicts, as the NFS permissions configuration uses the username, userID, and groupID). As you can see in your description above:
uid 0594903 // where 0 is the uid of root on another machine, and 594903 is the uid of hdfs, which is the superuser on the datanode machine where the NFS gateway is running. This is caused by the same mismatch, so you should keep 0 for root and update the userID for the hdfs user (but once you do that, you need to update a lot of directories to map to the new uid of the hdfs user). I am not sure how complicated this might get, but it has to be done. 2) Make sure the user you want to change the ownership to (chown) is part of the config files given when you changed the default.fs to NFS. These files (in my case, users.json and groups.json) list each user and its user ID, and the group of all users we want to configure for NFS goes into groups.json (group name and its ID). Example entry in users.json:
{
  "userName": "root",
  "userID": "0"
}
Example entry in groups.json:
{
  "groupName": "root",
  "groupID": "0"
}
Also, to run the 'chown' command, make sure you are doing this as the hdfs user (from your log above, it seems only the hdfs user can do this). Hope this helps.
03-22-2018
06:38 AM
@Vani Deeppak Have a look at this article: https://community.hortonworks.com/articles/53531/importing-data-from-teradata-into-hive.html It has the link(s) to the Sqoop documentation and also explains how to use it. Hope this helps.
02-13-2018
08:30 AM
@Ashnee Follow this link: http://eastcirclek.blogspot.in/2016/10/how-to-start-hive-llap-functionality.html It should solve your problem.
12-06-2017
10:10 AM
@Dmitro Vasilenko The error log above points to a memory issue: [pid=15416,containerID=container_e119_1512480218177_0094_01_000002] is running beyond physical memory limits. Current usage: 27.0 GB of 26 GB physical memory used; I think the memory settings for the LLAP daemon exceed the physically available memory. Please check.
12-05-2017
05:18 AM
@yassine: Check the permissions on the entire path '/usr/hdp/current/hadoop-client/conf' on all cluster nodes, and make sure the hdfs user can access it.