Member since: 03-01-2017
Posts: 34
Kudos Received: 2
Solutions: 0
07-10-2021 05:38 AM
Hi @dgiri_india1989 Could you please share more details on how you were able to fix this issue? We are facing a similar issue with the Ranger KMS service: the Ranger KMS principal is created in the AD KDC, and the keytab creation succeeds according to the Ambari server log, but the keytab is not distributed to the node hosting the Ranger KMS service. Because of this, the service does not start. Thank you.
02-18-2021 02:32 AM
1 Kudo
Running PySpark applications in CML for model generation and prediction, with data residing in COD
With the recent addition of the Cloudera Operational Database (COD) experience to CDP Public Cloud, we want to explore how it can be leveraged in a real-life 'DataFlow' end-user scenario. This article describes how to execute a Spark/PySpark job in CML to run a modeling task using data residing in COD. We read a table present in COD and, once the prediction is done, write the score table back to COD.
Getting Started
CDP Runtime (supporting COD) >=7.2.2
We assume that a CDP environment, a Data Lake, and a Data Hub (Data Engineering) have already been provisioned. We further assume that the COD and CML experiences have been provisioned for the target CDP environment.
Note: If you are just starting with CDP, refer to The world’s first enterprise data cloud to learn how all of these requirements can be put in place with ease.
Some of the following steps are already documented in this blog (thanks @shlomi Tubul). On top of that, we elaborate and expand on what needs to be done for the CML-COD use case.
Main components used in this demo:
Cloudera Operational Database (COD), as mentioned in my previous post, is a managed dbPaaS solution available as an experience in Cloudera Data Platform (CDP)
CML is designed for data scientists and ML engineers, enabling them to create and manage ML projects from code to production. Main features of CML:
Development environment for data scientists that is isolated, containerized, and elastic
Production ML Toolkit – Deploying, Serving, Monitoring, and Governance of ML models
App Serving – Build and Serve Custom applications for ML use-cases
Setting Up the Environment
The first thing we need to do is to create a database in COD:
Log in to Cloudera Data Platform (CDP) Public Cloud 'Control Plane' (CP)
Select Operational Database and then click Create Database
Select the environment to which the COD will be attached, give the COD a unique name, and then click Create Database
Once created, open the COD page and use the HBase Client Configuration URL to get the hbase-site.xml needed in CML
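For reference, the client configuration bundle can also be fetched directly from a CML session. The following is a minimal sketch under stated assumptions: the URL, user, and password are placeholders for the HBase Client Configuration URL shown on the COD database page and your workload credentials, and the endpoint is assumed to return a zip archive containing hbase-site.xml.

# Sketch only: the URL and credentials below are placeholders, and the
# endpoint is assumed to return a zip archive of the HBase client config.
import io
import zipfile
import requests

CLIENT_CONFIG_URL = "<HBase Client Configuration URL from the COD page>"
resp = requests.get(CLIENT_CONFIG_URL, auth=("<workload-user>", "<workload-password>"))
resp.raise_for_status()
with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
    zf.extractall("/home/cdsw")  # extracts hbase-site.xml into the project directory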
Next, provision CML:
Log in to CDP Public Cloud CP
Select Machine Learning and click Provision Workspace
Select the environment for which the CML workspace will be provisioned, give it a unique name, and then click Provision Workspace
Create Project in CML: Model and Prediction
Once CML is provisioned, we go ahead and create a project in the workspace. We use the local template and upload the required files to it. create_model_and_score_phoenixTable.py is the PySpark script that we will use for the task.
CML: Configuration for use in CML session
Upload the configuration files we downloaded from COD (A.4); we will need the hbase-site.xml file in the CML session to connect to COD.
We also need to configure the spark-defaults.conf file with the jars to be used. If any external cloud storage is in use (from where data is being read), we need to configure that as well so that Spark can authenticate with IDBroker and get access; a sketch of the equivalent session-level configuration follows. Note: Since we have the data in an external S3 bucket, we added the appropriate IDBroker mapping to allow the user access to this external bucket.
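Here is a minimal PySpark sketch of that configuration, assuming the same properties are set in code rather than in spark-defaults.conf; the jar paths and bucket name are placeholders, not values from our setup.

# Sketch only: jar paths and the bucket name are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("cml-cod-example")
    # Connector jars that would otherwise be listed in spark-defaults.conf
    .config("spark.jars", "/home/cdsw/jars/hbase-spark.jar,/home/cdsw/jars/phoenix-spark.jar")
    # Grants Spark access to the external bucket via IDBroker (CDP property)
    .config("spark.yarn.access.hadoopFileSystems", "s3a://<external-bucket>")
    .getOrCreate()
)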
Running the Task
The pyspark script we used can be found here.
Though the code in this file was written for CDSW integration (for on-prem setups), we modified it a little to work on the cloud-native platform, i.e., CDP Public Cloud.
First, we added two lines at the start of the script file. These lines are currently required to copy the hbase-site.xml config into Spark's default conf directory (so that the connection to COD works) and to make the file readable by all users. (There is no way to override this as of now, so this workaround is needed.)

We also modified the target_path for the temp files (that will be generated by the Spark job), since the user executing this job (a user that has been given the "MLUser" permission on the environment) needs access to the specified location.

# Copy the COD client config into Spark's default conf dir and make it world-readable
!cp /home/cdsw/hbase-site.xml /etc/spark/conf/
!chmod 644 /etc/spark/conf/hbase-site.xml

... same code section from the git file ...

# Location where the Spark job writes its temp files (in our case, an external S3 bucket)
target_path = "<path to the location (in our case, an external S3 bucket) where data is residing>"

... same code section from the git file ...

The rest of the file is unchanged.
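For orientation, the core read/write pattern in such a script looks roughly like the sketch below, assuming the phoenix-spark connector is on the classpath; the table names, the zkUrl, and score_df are illustrative placeholders rather than the exact contents of the git file.

# Sketch only: table names and the ZooKeeper URL are placeholders.
# Read the input table from COD via the phoenix-spark connector.
input_df = (
    spark.read.format("org.apache.phoenix.spark")
    .option("table", "INPUTTABLE")
    .option("zkUrl", "<cod-zookeeper-fqdn>:2181")
    .load()
)

# ... train the model and build the score DataFrame (score_df) here ...

# Write the score table back to COD; phoenix-spark writes use overwrite mode.
(
    score_df.write.format("org.apache.phoenix.spark")
    .option("table", "BATCHTABLE2")
    .option("zkUrl", "<cod-zookeeper-fqdn>:2181")
    .mode("overwrite")
    .save()
)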
Start running the project
Click New Session
Give the session a name and click the Start Session button at the bottom (adjust Workbench, kernel, and Resource Profile if required for the project)
Once the session has started, select the PySpark script file and click the Run icon in the menu at the top of the file contents. Once execution starts, the session logs and task logs tabs appear on the right half of the screen; the logs end when the script execution completes (success or failure). There we have it: on success, the table (BatchTable2) gets created in COD. The session can be closed manually by clicking the Stop button at the top-right corner (or it will be killed by the auto timeout if not in use for a certain amount of time).
12-04-2019 12:12 PM
If this issue is fixed, can you share the sample code? I am looking to use 1.6.3 and need some Java code samples for extracting data from Hive.
11-03-2018 03:56 AM
Following up on this. All services are up and running. Is there another tool I can use besides DBeaver to connect to HiveServer2?
10-24-2017 08:17 AM
Thanks all for your help. I resolved the issue by deleting the entries for the new server from the ambari.hoststate table and retrying. It worked. 🙂
10-10-2017 07:16 AM
1: Check if hbase-master is running:
sudo /etc/init.d/hbase-master status
If not, start it:
sudo /etc/init.d/hbase-master start

2: Check if hbase-regionserver is running:
sudo /etc/init.d/hbase-regionserver status
If not, start it:
sudo /etc/init.d/hbase-regionserver start

3: Check if zookeeper-server is running:
sudo /etc/init.d/zookeeper-server status
If not, start it:
sudo /etc/init.d/zookeeper-server start

4: Grep for an open port:
netstat -apn | grep <port to look for>

5: Process memory usage:
ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -5
or
ps -A --sort -rss -o comm,pmem | head -n 11

6: Free memory on the system (CentOS):
free -g

7: Checking YARN application logs:
yarn logs -applicationId <application ID>
Dig deeper by using the container ID along with the application ID:
yarn logs -applicationId <application ID> -containerId <container id>

8: Starting a zkCli connection to a ZooKeeper server:
cd /grid/0/hdp/current/zookeeper-client/bin
./zkCli.sh -server <zookeeper server fqdn>:2181

9: Starting the NameNode from the CLI:
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"

10: Spark History Server - starting it when the HDFS directory is missing:
hdfs dfs -mkdir /spark-history
hdfs dfs -chown -R spark:hadoop /spark-history
hdfs dfs -chmod -R 777 /spark-history
su - spark -c "/usr/hdp/current/spark-historyserver/sbin/start-history-server.sh"

11: Starting the History Server (MapReduce) from the CLI:
su -l mapred -c "/usr/hdp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh start historyserver"

12: Starting the App Timeline Server (YARN) from the CLI:
su - yarn
/grid/0/hdp/2.5.3.0-37/hadoop-yarn/sbin/yarn-daemon.sh start timelineserver

13: Starting and stopping the Oozie server from the CLI (on the machine where the Oozie server is installed):
su oozie
/usr/hdp/current/oozie-server/bin/oozied.sh start
/usr/hdp/current/oozie-server/bin/oozied.sh stop

14: Starting the HBase Master (HBase) from the CLI. Go to the cluster node where the HBase Master is installed, then:
su -l hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh start master; sleep 25"

15: Starting HiveServer2 (Hive) from the CLI. Go to the cluster node where HiveServer2 is installed, then:
su hive
nohup /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris=' ' > /tmp/hiveserver2HD.out 2> /tmp/hiveserver2HD.log
Or:
su - hive -l -c 'HIVE_CONF_DIR=/etc/hive/conf /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris="" -hiveconf hive.log.dir=/var/log/hive -hiveconf hive.log.file=hiveserver2.log 1>/var/log/hive/hiveserver2.log 2>/var/log/hive/hiveserver2.log &'
Or:
sudo su - -c "export HIVE_CONF_DIR=/tmp/hiveConf;nohup /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris=' ' -hiveconf hive.log.file=hiveServer2.log -hiveconf hive.log.dir=/var/log/hive > /var/log/hive/hiveServer2.out 2>> /var/log/hive/hiveServer2.log &" hive
To find the HiveServer2 PID and recreate the pid files:
[root@stlrx2540m1-109 ~]# ps -ef | grep HiveSer
[root@stlrx2540m1-109 ~]# echo 28627 > /var/run/hive/hive-server.pid
[root@stlrx2540m1-109 ~]# echo 28627 > /var/run/hive/hive.pid
[root@stlrx2540m1-109 ~]# chmod 644 /var/run/hive/hive-server.pid
[root@stlrx2540m1-109 ~]# chmod 644 /var/run/hive/hive.pid
[root@stlrx2540m1-109 ~]# chown hive:hadoop /var/run/hive/hive-server.pid
[root@stlrx2540m1-109 ~]# chown hive:hadoop /var/run/hive/hive.pid
10-10-2017 06:50 AM
1 Kudo
Manual cluster install:

1. Create VMs: OS = RHEL6/RHEL7 (depending on your HDP version); Java = OpenJDK 8. Set JAVA_HOME on all the nodes of the cluster (make sure you have the correct Java path before setting it; the path value might differ based on which Java subversion gets installed):
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk.x86_64

2. Set up passwordless SSH: pick one node as the Ambari host, create a public/private key pair using ssh-keygen, then copy the public key into the "/home/root/.ssh/authorized_keys" file on all the nodes. Test that passwordless SSH works (by doing ssh from the Ambari node to all other nodes); if this fails, you need to resolve it first, as the entire installation depends on it.

3. NTP setup (on all nodes):
yum install -y ntp
Start ntp: "/etc/init.d/ntpd start" (RHEL6); "systemctl enable ntpd" and "systemctl start ntpd" (RHEL7)

4. Check the "/etc/hosts" file on all hosts; it should have entries for all nodes:
vi /etc/hosts

5. Edit the network file (on all nodes):
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=<fully.qualified.domain.name>
(the FQDN of the particular node where you are editing the /etc/sysconfig/network file)

6. Stop iptables (it can sometimes throw an error like "permission denied"; you can ignore it):
service iptables stop (RHEL6)
systemctl disable firewalld and service firewalld stop (RHEL7)

7. Disable SELinux (all nodes):
setenforce 0

8. Set umask to 0022 (all nodes):
umask 0022
echo umask 0022 >> /etc/profile

9. Get the Ambari repo file (on the node where Ambari will be installed):
wget -nv http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/2.x/BUILDS/2.4.3.0-35/ambaribn.repo -O /etc/yum.repos.d/ambari.repo (RHEL6, HDP 2.6)
wget -nv http://public-repo-1.hortonworks.com/ambari/centos7-ppc/2.x/updates/2.5.0.1/ambari.repo -O /etc/yum.repos.d/ambari.repo (RHEL7, HDP 2.6)

10. Install the Ambari server (Ambari server node only):
yum install ambari-server

11. If you want to use MySQL for Ambari, set up MySQL. Check the page: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.0.0/bk_ambari_reference_guide/content/_using_ambari_with_mysql.html
yum install mysql-connector-java
yum install mysql-server
Start the service once MySQL is installed: service mysqld start
Do the steps to set up the DAT file required by the Ambari install. When you run 'mysql -u root -p', it will ask for a password; just hit Enter (i.e., blank password).

12. Do the Ambari server config, and start the Ambari server:
ambari-server setup -j $JAVA_HOME (assuming JAVA_HOME is already set; if not, set it)
Accept y to temporarily disable SELinux.
Accept n for "Customize user account for ambari-server" (assuming Ambari runs under the "root" user).
Accept y to temporarily disable iptables.
Select n for advanced database configuration (select y if you want to set up Ambari with MySQL or any other DB, which should already be installed on the same node); at "Proceed with configuring remote database connection properties [y/n]", choose y.
This completes the setup.

13. Start and check the Ambari server:
ambari-server start
ambari-server status
If you want to stop the Ambari server: ambari-server stop

When successful, you should be able to reach the Ambari UI and work from there on. Happy installing...
07-10-2017 06:40 AM
1st: If you executed the Spark command with master set to local, check the connection host and port on that local server. 2nd: Check your firewall and iptables status, whether it is on or off.
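For example, a quick way to check from the same machine whether a given host/port is reachable (the host and port below are placeholders):

# Sketch only: host and port are placeholders (e.g., the Spark master's port).
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    reachable = (s.connect_ex(("<host>", 7077)) == 0)

print("port is open" if reachable else "port is closed or filtered")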