Member since: 06-05-2019
Posts: 128
Kudos Received: 133
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1798 | 12-17-2016 08:30 PM |
|  | 1344 | 08-08-2016 07:20 PM |
|  | 2382 | 08-08-2016 03:13 PM |
|  | 2489 | 08-04-2016 02:49 PM |
|  | 2299 | 08-03-2016 06:29 PM |
02-03-2023
12:10 PM
Throughout my seven years working with Cloudera/Hortonworks, I'm always learning new things. One thing I've learned from the Cloudera/Hortonworks merger is how amazing CDSW/CML is as a product. CML isn't only for data scientists; it's for anyone who needs an IDE (Integrated Development Environment). Coming from a development background working with Eclipse and IntelliJ, you become dependent on a solid IDE. In the past, I've used IDEs to develop applications, and then a build process would eventually deploy them to a run environment. This is where CML shines: you're able to run your applications within CDP at scale, with enormous amounts of data. Anyone using CML is over the moon running their applications/projects within CML because of the simplicity, as I'll demonstrate below.

The tagline of CML is "BYOL" (Bring Your Own Libraries), meaning ALL libraries are welcome (outside of the Cloudera ecosystem). This differentiates Cloudera from others such as native Azure, which is HIGHLY dependent on all things Microsoft (unless the application owner created something native Azure-specific). I'll demonstrate how easy it is to deploy a third-party framework such as Django ("The web framework for perfectionists with deadlines"), where the installation/run feels like you're running on your local laptop instead of a highly scalable IDE that runs anywhere. Remember that CML runs ANYWHERE: within cloud providers such as Azure, AWS, or GCP, and on-premises. Cloudera abstracts out the complexities. Using CML, we'll go from installing Django to running Django in a matter of minutes.

Step 1: Find the read-only URL to run your embedded application (Django)

import os
url = os.environ["CDSW_ENGINE_ID"] + "." + os.environ["CDSW_DOMAIN"]
print("http://read-only-%s" % url)

Step 2: Install Django

!pip install django

Step 3: Create a Django project as instructed here

!django-admin startproject mysite
cd mysite

Step 4: Modify the settings.py file, adding the 'read-only-%s' value from Step 1 and localhost

ALLOWED_HOSTS = ['localhost','read-only-yourhostnamefromstep1']

Step 5: Run Django

!python manage.py runserver localhost:$CDSW_READONLY_PORT

That's IT! If you'd like to access your Django page, navigate to the URL from Step 1! Nothing specialized for CML; as we say, BYOL!
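For reference, here is the whole flow as one terminal session: a minimal sketch, assuming a freshly generated project where settings.py still contains the default "ALLOWED_HOSTS = []" line (the sed one-liner is my own shortcut, not part of the original steps):

# Run from a CML/CDSW session terminal; CDSW_ENGINE_ID, CDSW_DOMAIN, and
# CDSW_READONLY_PORT are provided by the CML environment (see Step 1).
pip install django
django-admin startproject mysite
cd mysite
# Splice the read-only host into ALLOWED_HOSTS (assumes the default
# "ALLOWED_HOSTS = []" line from a fresh settings.py).
sed -i "s/^ALLOWED_HOSTS = \[\]/ALLOWED_HOSTS = ['localhost', 'read-only-${CDSW_ENGINE_ID}.${CDSW_DOMAIN}']/" mysite/settings.py
# Serve on the read-only port so the URL from Step 1 reaches the app.
python manage.py runserver localhost:$CDSW_READONLY_PORT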
10-04-2021
09:39 PM
Hello @RyanCicak, I'm trying this flow but it doesn't work for me. This is my flow. What should I do? Thanks
05-16-2018
11:20 PM
7 Kudos
In order to debug DLM pairing, you'll need the following prerequisite: root access to the DPS VM.

Problem statement: have you received an error when pairing a cluster? Follow these step-by-step instructions to access the DLM log and gain the granular log information that will help you debug:

1) Run "sudo docker ps" to get the container id for "dlm-app". In the image above, the container id for "dlm-app" is "83d879e9a45e".

2) Once you have the container id, run the following: "sudo docker exec -it 83d879e9a45e /bin/tailf /usr/dlm-app/logs/application.log"

This gives you insight into the DPS-DLM application; in the example above, you'll see "ERROR". The error is logged as soon as you click "Pair" in the DLM UI. Using the information from the log, you'll be able to troubleshoot your issue.
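If you'd rather not copy the container id by hand, the two steps can be combined. A small sketch, assuming the container name is exactly "dlm-app" (tail -f is swapped in for tailf, which isn't present on every image):

# Look up the dlm-app container id, then follow its application log.
CONTAINER_ID=$(sudo docker ps -qf "name=dlm-app")
sudo docker exec -it "$CONTAINER_ID" tail -f /usr/dlm-app/logs/application.log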
04-03-2018
07:50 PM
4 Kudos
Reading and writing files to a MapR cluster (version 6) is simple, using the standard PutFile or GetFile processors and the MapR NFS. If you've searched high and low on how to do this, you've likely read articles and GitHub projects specifying steps. I've tried these steps without success, meaning what's out there is too complicated or outdated to get NiFi reading/writing to MapR. You don't need to re-compile the HDFS processors with the MapR dependencies; just follow the steps below:

1) Install the MapR client on each NiFi node

#Install syslinux (for rpm install)
sudo yum install syslinux
#Download the RPM for your OS from http://package.mapr.com/releases/v6.0.0/redhat/
rpm -Uvh mapr-client-6.0.0.20171109191718.GA-1.x86_64.rpm
#Configure the MapR client, connecting to the CLDB
/opt/mapr/server/configure.sh -c -N ryancicak.com -C cicakmapr0.field.hortonworks.com:7222 -genkeys -secure
#Once you have the same users/groups on your OS (as MapR), you will be able to use maprlogin password (allowing you to log in with a Kerberos ticket)
#Prove that you can access the MapR FS
hadoop fs -ls /

2) Mount the MapR FS on each NiFi node

sudo mount -o hard,nolock cicakmapr0.field.hortonworks.com:/mapr /mapr

*This will allow you to access the MapR FS at the mount point /mapr/yourclustername.com/location

3) Use the PutFile and GetFile processors, referencing the /mapr directory on your NiFi nodes

*Following steps 1-3 allows you to quickly read/write to MapR using NiFi.
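To keep the mount from step 2 across reboots, an /etc/fstab entry should do it. A sketch using the same host and options as above (verify the NFS options against your MapR documentation):

# Persist the MapR NFS mount across reboots.
sudo mkdir -p /mapr
echo "cicakmapr0.field.hortonworks.com:/mapr  /mapr  nfs  hard,nolock  0 0" | sudo tee -a /etc/fstab
sudo mount -a   # mounts everything listed in fstab, including the new entry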
11-19-2018
03:23 PM
What kind of extension should the PACKAGES file have?
10-25-2017
07:37 PM
7 Kudos
Installing the Alarm Fatigue Demo via Cloudbreak:
There are multiple ways to deploy the Alarm Fatigue Demo via Cloudbreak. Below are four options:
1) Deploy via the Cloudbreak UI
a) Log in to https://cbdtest.field.hortonworks.com
b) Select your credentials; if your credentials don't exist, create them under "Manage Credentials"
c) Once your credentials are selected, click "Create Cluster"
d) Make up a cluster name, choose the Availability Zone (SE), and then click "Setup Network and Security"
e) "fieldcloud-openstack-network" should be selected; click "Choose Blueprint"
f) Select the Blueprint called "alarm_fatigue_v2": for Host Group 1, select Ambari Server, alarm-fatigue-demo, and pre-install-java8; for Host Groups 2 and 3, select pre-install-java8
g) Click "Review and Launch"
h) Click "Create and start cluster" (after clicking, the deployment via Cloudbreak will likely take 30-50 minutes, so go get a coffee)
2) Deploy via Bash Script (specifying configuration file)
Create a file named .deploy.config with the following contents:

Version=0.5
CloudBreakServer=https://cbdtest.field.hortonworks.com
CloudBreakIdentityServer=http://cbdtest.field.hortonworks.com:8089
CloudBreakUser=admin@example.com
CloudBreakPassword=yourpassword
CloudBreakCredentials=
CloudBreakClusterName=alarmfatigue-auto
CloudBreakTemplate=openstack-m3-xlarge
CloudBreakRegion=RegionOne
CloudBreakSecurityGroup=openstack-connected-platform-demo-all-services-port-v3
CloudBreakNetwork=fieldcloud-openstack-network
CloudBreakAvailabilityZone=SE

Change the values to match your environment (user, password, credentials, and cluster name), then execute the following:

wget -O - https://raw.githubusercontent.com/ryancicak/northcentral_hackathon/master/CloudBreakArtifacts/cloudbreak-cmd/deployer.sh | bash
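A safer variant of the same one-liner is to download the deployer first and look it over before running it; since the script picks up .deploy.config from the working directory (as this option implies), the sketch below is equivalent:

# Download, inspect, then run the deployer instead of piping straight to bash.
wget -O deployer.sh https://raw.githubusercontent.com/ryancicak/northcentral_hackathon/master/CloudBreakArtifacts/cloudbreak-cmd/deployer.sh
less deployer.sh      # sanity-check what you're about to execute
chmod +x deployer.sh
./deployer.sh         # run with .deploy.config in the working directory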
3) Deploy via Bash Script, inputting configurations while prompted. Just execute:

wget -O - https://raw.githubusercontent.com/ryancicak/northcentral_hackathon/master/CloudBreakArtifacts/cloudbreak-cmd/deployer.sh | bash

and fill out the information as prompted.
4) Deploy via Jenkins
All four options will deploy, install, configure, and run all necessary services, including "Alarm Fatigue Demo Control".
09-22-2017
03:08 PM
Thanks Ryan. Can you please verify the following would work? The value of the FlowFile attribute grok.expression is:

(?<severity>.{1}) (?<time>.{8}) (?<sequence>.{8}) (?<source>.{12}) (?<destination>.{12}) (?<action>.{30}) %{GREEDYDATA:data}

Within Configure Processor of the ExtractGrok processor, the value of Grok Expression is:

${grok.expression}

The expected behavior is that the ExtractGrok processor would continue to work as though the Grok Expression were hardcoded with the pattern above.
03-10-2017
05:36 PM
Hi @Ryan Cicak, I checked the metastore URI; it was not correct, so I fixed it, but now I'm getting a different error. There's no Kerberos on my setup, and our HDP is HA. This error is similar to what @Matt Burgess pointed to as a past issue, NIFI-2873, but that issue should be resolved in NiFi 1.1.0 and above. I am trying this in NiFi 1.1.2, but I still see the same issue as pre-1.1.0. Any thoughts?
03-02-2017
03:23 PM
Hi Sunile, as we discussed yesterday, I found this while installing HDP 2.5.3 using Ambari 2.4.2. Looking further into this, RHEL 7.3 comes with snappy 1.1.0-3.el7 pre-installed, while HDP 2.5.3 needs snappy 1.0.5-1.el6.x86_64. I spun up a RHEL 7.3 instance and ran the following command, showing that snappy 1.1.0-3.el7 came pre-installed. As Jay posted, looking at the latest documentation for Ambari 2.4.2, I found this problem in "Resolving Cluster Deployment Problems"; there should be a bug fix for RHEL 7 (so we don't rely on a RHEL 6 dependency): https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-troubleshooting/content/resolving_cluster_install_and_configuration_problems.html What do you think?
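For reference, a quick way to check the installed snappy version (presumably what the screenshot showed, though this is my reconstruction rather than the original command):

# List installed snappy packages; a stock RHEL 7.3 box shows snappy-1.1.0-3.el7.
rpm -qa | grep -i snappy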
11-30-2016
01:18 AM
5 Kudos
Prerequisites:
1) The Ambari Infra service installed (Ranger will use Ambari Infra's SolrCloud for Ranger Audit)
2) MySQL installed and running (I'll use Hive's Metastore MySQL instance; MySQL is one of many DB options)

Installing Apache Ranger using Ambari Infra (SolrCloud) for Ranger Audit:

1) Find the location of mysql-connector-java.jar (assume /usr/share/java/mysql-connector-java.jar) and run the following command on the Ambari server:

sudo ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

2) In Ambari, click Add Service
3) Choose Ranger and click Next
4) Choose "I have met all the requirements above." and click Proceed (this was done in step 1 above)
5) Assign master(s) for "Ranger Usersync" and "Ranger Admin" and click Next
6) Assign Slaves and Clients (since we did not install Apache Atlas, Ranger TagSync is not required) and click Next
7) Customize Services -> Ranger Audit: click "OFF" to enable SolrCloud
8) Customize Services -> Ranger Admin: enter the DB you chose in "Ranger DB host" (in my case, I chose MySQL) and a password in "Ranger DB password" for the user rangeradmin. *Ranger will automatically add the user "rangeradmin". Add the proper credentials for a DB user with administrator privileges (this administrator will create the rangeradmin user and the Ranger tables). To create an administrator user in MySQL (*Note: rcicak2.field.hortonworks.com is the server where Ranger is being installed):

CREATE USER 'ryan'@'rcicak2.field.hortonworks.com' IDENTIFIED BY 'lebronjamesisawesome';
GRANT ALL PRIVILEGES ON *.* TO 'ryan'@'rcicak2.field.hortonworks.com' WITH GRANT OPTION;

Click Next
9) Review -> click Deploy (*Install, Start and Test will show you the progress of the Ranger installation)
10) Choose Ranger in Ambari
11) Choose "Configs" and "Ranger Plugin", and select the services you'd like Ranger to authorize (you'll need to restart the service after saving changes)
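Before clicking Next in step 8, it can save a failed install to confirm that the admin account actually works from the Ranger host. A small sketch (the MySQL hostname is a placeholder for wherever your metastore MySQL instance runs):

# From the Ranger host (rcicak2.field.hortonworks.com), verify the admin login.
mysql -h your-mysql-host.example.com -u ryan -p -e "SELECT CURRENT_USER();"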