Member since 09-17-2015
436 Posts | 736 Kudos Received | 81 Solutions
09-28-2016
02:49 AM
8 Kudos
In the previous article, we showed how to enable SSL and set up identity mappings for Apache Nifi on the previously installed HDF 2.x or 3.0 cluster. Here, we will build on the same cluster and show how to install Apache Ranger and set up the Ranger NiFi plugin. For simplicity, we will assume this is a demo environment where there is no requirement to enable SSL for Ranger. If instead you would like to use secured Ranger with NiFi, follow the steps here.
Summary
At a high level, Apache Ranger provides a centralized platform to define, administer and manage security policies consistently across Hadoop components. In the case of HDF, it enables the administrator to create/manage authorization policies for Kafka, Storm and Nifi from the same web interface (or REST APIs). To achieve this, the high level steps we will follow are:
Ranger install prerequisites
Ranger install
Update Nifi Ranger repo
Test Ranger plugin
Create Ranger users and policies
Test Nifi access as nifiadmin user
The official documentation for this can be found here. Tested with HDF 2.x and 3.0
Step Details
1. Ranger install prerequisites:
a) Make sure Logsearch or an external Solr is installed/running before installing Ranger (it is used to store audits). In our case, we had deployed the cluster with Logsearch, so we will use that option.
b) Configure an RDBMS for Ranger (used to store policies). In our case we will use the same Postgres used by Ambari. So from the Ambari node, run below:
ranger_user=rangeradmin #set this to DB user you wish to own Ranger schema in RDBMS
ranger_pass=BadPass#1 #set this to password you wish to use
yum install -y postgresql-jdbc*
chmod 644 /usr/share/java/postgresql-jdbc.jar
echo "CREATE DATABASE ranger;" | sudo -u postgres psql -U postgres
echo "CREATE USER ${ranger_user} WITH PASSWORD '${ranger_pass}';" | sudo -u postgres psql -U postgres
echo "ALTER DATABASE ranger OWNER TO ${ranger_user};" | sudo -u postgres psql -U postgres
echo "GRANT ALL PRIVILEGES ON DATABASE ranger TO ${ranger_user};" | sudo -u postgres psql -U postgres
sed -i.bak s/ambari,mapred/${ranger_user},ambari,mapred/g /var/lib/pgsql/data/pg_hba.conf
cat /var/lib/pgsql/data/postgresql.conf | grep listen_addresses
#make sure listen_addresses='*'
ambari-server setup --jdbc-db=postgres --jdbc-driver=/usr/share/java/postgresql-jdbc.jar
service ambari-server stop
service postgresql restart
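#optional sanity check (not part of the original steps): confirm the ranger DB was created and is owned by the new user before continuing
echo "\l" | sudo -u postgres psql -U postgres | grep ranger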
service ambari-server start
2. Ranger install: Start the Ambari 'Add Service' wizard, select Ranger and choose any host to install the Ranger components on.
a) On the configuration screen there are a few things to set. On the 'Ranger Admin' tab set the below and run 'Test connection':
Db flavor: POSTGRES
Host: FQDN of Ambari node
Database Administrator (DBA) username: rangeradmin
Passwords: BadPass#1
b) The 'Ranger User Info' tab is where you would optionally configure Ranger to pull users from Active Directory or LDAP (see here for sample steps on how we set up our AD)
‘Common configs’ sub-tab
'User configs' sub-tab
c) On the 'Ranger Plugin' tab, enable plugins for Nifi, Storm and Kafka. (Note: the plugins for Storm/Kafka will not be enabled until kerberos is enabled on the cluster)
d) On the 'Ranger Audit' tab, provide Solr details. In our case, since the Logsearch/Ambari_infra components were installed, just turn on SolrCloud - Ambari will autodetect the Zookeeper string
e) On the 'Ranger Tagsync' tab, no changes needed
f) On 'Advanced', no changes needed. If you wanted to set up the ability to use AD/LDAP credentials to log into Ranger, you can configure this (and other advanced features) here
g) Click Next > Proceed Anyway > Deploy to start the Ranger install and wait for it to complete
h) Once installed, Ambari will show that Storm, Kafka and Nifi need to be restarted. Use the "Restart All Required" button (new in Ambari 2.4) to do this:
3. Update Nifi Ranger repo: This is needed to enable auto-completion when creating policies in Ranger for Nifi. Note that if this step is skipped, the Ranger plugin will still work as usual - it just impacts lookups when creating Nifi policies from the Ranger web interface. If SSL for Ranger will not be set up, you should consider just skipping this step. To access the Nifi repo in Ranger:
a) Open Ranger using the Quicklink in Ambari
b) In Ranger > Access Manager > Nifi > click the Edit icon
c) Notice most of the configs are empty. If you try a test connect, it will fail with the below:
d) On the Ranger host, run below to find the keystore/truststore details (like path, type and password):
cat /usr/hdf/current/nifi/conf/nifi.properties | grep nifi.security
e) Ensure the ranger user can access the key/truststores by running below on the Ranger host:
chmod o+r /usr/hdf/current/nifi/conf/keystore.jks /usr/hdf/current/nifi/conf/truststore.jks
Note: in secure environments Ranger should not access the Nifi keystore/truststore: there should be a separate keystore/truststore for Ranger to use as part of enabling SSL for it. Also note that these files could be re-generated by the Nifi CA, resetting the permissions
f) Update as shown:
Keystore path, type, password
Truststore path, type, password
g) Now the test connection should return a 403 error: This is an authorization error from Ranger (since we have not yet created any policies). Click Save to commit the changes you made
4. Test Ranger plugin
Attempting to open the Nifi UI results in "Access denied" due to insufficient permissions: Navigate to the 'Audit' tab in the Ranger UI and notice that the requesting user showed up in the Ranger audit. This shows the Ranger Nifi plugin is working. Notice how Ranger is showing details such as the below for multiple HDF components:
what time the access attempt occurred
user/IP who attempted the access
resource that was attempted to be accessed
whether access was allowed or denied
Also notice that Nifi now shows up as one of the registered plugins (under the 'Plugins' tab)
5. Create Ranger users and policies
To be able to access the Nifi UI we will need to create a number of objects. The details below assume you have already set up identity mappings for Nifi (as described in the previous article), but you should be able to follow similar steps even if you have not.
i) Ranger users for admin and node identities (in a real customer env, you would not be manually creating these: they would be synced over from Active Directory/LDAP)
nifiadmin@CLOUD.HORTONWORKS.COM
node1.fqdn@CLOUD.HORTONWORKS.COM
node2.fqdn@CLOUD.HORTONWORKS.COM
node3.fqdn@CLOUD.HORTONWORKS.COM
ii) Read policy on /flow for node1-3 identities
iii) Read/write policy on /proxy for node1-3 identities
iv) Read/write policy on /data/* for node1-3 identities (needed to list/delete queue)
v) Read/write policy on /* for nifiadmin identity (needed to make nifiadmin an admin)
More details on what Ranger policies to create can be found here
Option 1: Run script below from Ranger node to create above using Ranger's REST APIs
export hosts="node1.fqdn node2.fqdn node3.fqdn" #set hostnames of nodes running Nifi
export admin="nifiadmin" #set your desired Nifi admin user
export realm="CLOUD.HORTONWORKS.COM" #set domain of certificate
export cluster="HDF" #set cluster name
#download/run script
curl -sSL https://gist.github.com/abajwa-hw/2b59db1a850406616d4583f44bad0a78/raw | sudo -E sh
End result:
Option 2: Manually create users and policies
Create local users in Ranger for all requesting users from the Ranger UI under Settings > Users/Groups. Assuming you set up identity mapping earlier, create the users appropriately e.g. node-1.fqdn@CLOUD.HORTONWORKS.COM, node-2.fqdn@CLOUD.HORTONWORKS.COM, node-3.fqdn@CLOUD.HORTONWORKS.COM
Alternatively, if you do not wish to use node identities, you would enter the long form of the identity as the username (e.g. CN=nifiadmin, OU=CLOUD.HORTONWORKS.COM; CN=node-1.fqdn, OU=CLOUD.HORTONWORKS.COM; CN=node-2.fqdn, OU=CLOUD.HORTONWORKS.COM; CN=node-3.fqdn, OU=CLOUD.HORTONWORKS.COM)
Now create Ranger policies for node identities for each host:
/flow - read
/proxy - read/write
/data/* - read/write
To do this, access the Nifi policies in Ranger by navigating to Ranger > Access Manager > Nifi > HDF_nifi. Then click 'Add New Policy' to display the below form:
Create a new READ policy for node identities on /flow:
Similarly, create a new READ/WRITE policy for node identities on /proxy:
Similarly, create a new READ/WRITE policy for node identities on /data/*:
We still need to manually add the nifiadmin user to the global policy. To do this, click the 'HDF_nifi' link highlighted: Then click the Edit icon on the "all-nifi-resource" policy: Under 'Select User' add nifiadmin@CLOUD.HORTONWORKS.COM and provide Read/Write access, then Save.
6. Test Nifi access as nifiadmin user
Whether you created the users/policies via the script or manually, at this point the Nifi policy page should appear as below: Note that it may take up to 30s after creating the policies in the Ranger UI for them to take effect. How to confirm that the new policies were downloaded by the Nifi Ranger plugin after we created them? You can do this by checking the first 'Export Date' for the nifi service under the Audit > Plugins tab in Ranger: when this timestamp shows a time after the changes were made, it means the new policies have been downloaded and should be in effect.
Open the Nifi UI via the Quicklink and confirm it now opens. Confirm via Ranger audits that Ranger now allows access. With this you have successfully installed Ranger and configured Nifi to use the Ranger authorizer.
Other things to try: Try disabling the policies created one by one, waiting 30s and refreshing the Nifi UI to see what breaks.
Tips:
1. To authorize separate users/groups access to different parts of a flow, implement multiple process groups and then:
Grant user/group access to modify the NiFi flow with a policy for /process-groups/<root-group-id> with RW
Create a separate policy for /provenance/process-groups/<root-group-id> (with each of the cluster node DNs) for read access
2. Troubleshooting tip: When the Ranger plugin is enabled and you are encountering permission errors trying to log in to Nifi or perform a certain action within Nifi, check the Ranger audits for any 'Denied' requests. In the event that you encounter these, Ranger will tell you exactly what user was trying to access what resource, which will help you create the right policy to avoid the issue.
What next?
If you haven't already, review what Ranger policies you can create for Nifi here: https://community.hortonworks.com/articles/60842/hdf-20-defining-nifi-policies-in-ranger.html
Next, we will enable kerberos on the cluster and show how users can then log in to Nifi without certificates (using AD/KDC credentials):
Steps for enabling security on HDF using Active Directory:
https://community.hortonworks.com/articles/60186/hdf-20-use-ambari-to-enable-kerberos-for-hdf-clust-1.html Steps for enabling security on HDF using KDC:
https://community.hortonworks.com/articles/58793/hdf-20-use-ambari-to-enable-kerberos-for-hdf-clust.html
09-26-2016
06:47 AM
8 Kudos
Summary:
Automation/AMI to install HDP 2.5.x with Nifi 1.1.0 on any cloud and deploy commonly used demos via Ambari blueprints
Currently supported demos:
Nifi-Twitter
IoT (trucking) demo
Zeppelin notebooks
Vanilla HDF 2.1 (w/o any demos)
Option 1: Deploy single node instances using AMIs
1. For deploying the above on single node setups on Amazon, AMI images are also available. To launch an instance using one of the AMIs, refer to steps below. A video that shows using these steps to launch the HDP 2.5.3 AMI is available here.
Login into EC2 dashboard using your credentials
Change your region to "N. California"
Click 'Launch instance'
Choose AMI: search for 081339556850 under Community AMIs (as shown in screenshot), select the desired AMI. For the HDP 2.5.x version of the AMI that has the demos pre-installed, select "HDP 2.5 Demo kit cluster"
Choose instance type: select m4.2xlarge for HDP AMIs or m4.xlarge for HDF
Configure instance: leave defaults
Add storage: 100gb or larger (500gb preferred)
Tag: name your instance and add any tags you like
Configure Security Group: choose security group that opens all the ports (e.g. sg-1c53d279summit2015) or create new
While deploying choose an SSH key you have the .pem file for or create new
2. Once the instance comes up and Ambari server/agent are fully up, it will automatically start the services. You can monitor this by connecting to your instance via
SSH as ec2-user and tailing /var/log/hdp_startup.log
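For example, a minimal sketch (the key file name and public DNS below are placeholders, not values from this article):
ssh -i ~/mykey.pem ec2-user@<instance-public-dns>
tail -f /var/log/hdp_startup.log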
3. Once the service start call was made, you can login to Ambari UI (port 8080) to monitor progress. Note: if Ambari is not accessible make sure a) the security group you used has a policy for 8080 b) you waited enough time for Ambari to come up.
The password for 'admin' user of Ambari and Zeppelin is defaulted to your AWS account number. You can look this up using your EC2 dashboard as below
4. So 15-20 min after AWS shows the instance came up, you should see a fully started cluster. Note: in case any service does not come up, you can bring it up using the 'Service Actions' menu in Ambari
Notes:
Once the cluster is up, it is recommended that you change the Ambari and Zeppelin admin passwords
The instance launched is EBS backed - so the VM can be stopped when not in use and restarted when needed. Just make sure to stop all HDP/HDF services via Ambari before stopping the instance via EC2 dashboard.
What gets installed?
HDP 2.5.x with below vanilla components
IotDemo demo service - allows users to stop/start Iot Demo, open webUI and generate events
Demo Ambari service for Solr
This service will pre-configure Solr/Banana for Twitter demo
Demo Ambari service for Nifi 1.1
The script auto-deploys the specified flow - by default, it deploys the Twitter flow but this is overridable
Even though the flow is deployed, you will need to set processors that contain env-specific details e.g. you will need to enter Twitter key/secret in GetTwitter processor
IoT Trucking demo steps
Once the instance is up, you can follow the below steps to start the trucking demo. Video here
- In Ambari, open 'IotDemo UI' using quicklink:
- In IotDemo UI, click "Deploy the Storm Topology"
- After 30-60 seconds, the topology will be deployed. Confirm using the Storm View in Ambari:
- Click "Truck Monitoring Application" link in 'IotDemo UI' to open the monitoring app showing an empty map.
- Click 'Nifi Data Flow' in IotDemo UI to launch Nifi and then double click on 'Iot Trucking demo' processor group. Then right click on both PublishKafka_0_10 processors > Configure > Properties. Confirm that the 'Kafka Broker' hostname/port is correctly populated. The flow should already be started so no other action is needed.
- In Ambari, click "Generate Events" to simulate 50 events (this can be configured)
- Switch back to "Truck Monitoring Application" in IotDemo UI and after 30s the trucking events will appear on screen
- Explore Storm topology using Storm View in Ambari
Nifi Sentiment demo
Next you can follow the below steps to start the Nifi sentiment demo. Video of these steps available here
- Open Nifi UI using Quicklinks in Ambari
- Double click "Twitter Dashboard" to open this process group:
- Right click "Grab Garden Hose" > Properties and enter your Twitter Consumer key/secret and Access token/secret. Optionally change the 'Terms to filter on' as desired. Once complete, start the flow.
- Use Banana UI quicklink from Ambari to open Twitter dashboard
- An empty dashboard will initially appear. After a minute, you should start seeing charts appear
Zeppelin demos
- Open Zeppelin UI via Quicklink
- Login as admin. Password is same as Ambari password
- Demo notebooks will appear. Open the first notebook and walk through each cell.
Option 2: To install HDP (including demos) or HDF using scripts
Pre-reqs:
One or more freshly installed CentOS/RHEL 6 or 7 VMs on your cloud of choice
Do not run this script on VMs running an existing HDP cluster or sandbox
If planning to install ‘IoT Demo’ make sure you allocate enough memory - especially if also deploying other demos
16GB or more of RAM is recommended if using single node setup
The sample script should only be used to create test/demo clusters
Default password for Ambari and Zeppelin admin users is BadPass#1
Override by exporting ambari_password prior to running the script
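For example, before running the install script (the value shown is just an illustration, not a recommended password):
export ambari_password="MyStrongPass#1"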
Steps:
1. This step is only needed if installing a multi-node cluster. After choosing a host where you would like Ambari-server to run, first prepare the other hosts. Run this on all hosts
where Ambari-server will not be running to run pre-requisite steps, install Ambari-agents and point them to Ambari-server host:
export ambari_server=<FQDN of ambari-server host>
curl -sSL https://raw.githubusercontent.com/seanorama/ambari-bootstrap/master/ambari-bootstrap.sh | sudo -E sh ;
2. Run remaining steps on host
where Ambari-server is to be installed. These run pre-reqs, install Ambari-server and deploy the requested demos
a)
To install HDP 2.5.x (Ambari 2.4.1/Java 8) - including Solr/Nifi 1.0.0 via Ambari - and deploy a Nifi flow:
export host_count=1 #set to number of nodes in your cluster (including Ambari-server node)
export hdp_ver=2.5
export install_nifidemo=true
export install_iotdemo=true
curl -sSL https://gist.github.com/abajwa-hw/3f2e211d252bba6cad6a6735f78a4a93/raw | sudo -E sh
After 5-10 min, you should get a message saying the blueprint was deployed. At this point you can open Ambari UI (port 8080) and monitor the cluster install
Note: if you installed iotdemo on a multi-node cluster, there may be some manual steps required (e.g. moving storm jars or setting up the latest Storm view). See here for more info: https://github.com/hortonworks-gallery/iotdemo-service/tree/hdp25#post-install-manual-steps
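If you prefer to watch install progress from the command line rather than the Ambari UI, a minimal sketch using Ambari's REST API (assuming you run it on the Ambari server node with the default BadPass#1 password; fill in the cluster name reported by the first call):
curl -s -u admin:BadPass#1 http://localhost:8080/api/v1/clusters
curl -s -u admin:BadPass#1 http://localhost:8080/api/v1/clusters/<cluster_name>/requests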
b)
To install HDP 2.4 (Ambari 2.4.1/Java 8) - including IoTDemo, plus Solr/Nifi 1.0.0 via Ambari - and deploy the Nifi Twitter flow, run below:
export host_count=1 #set to number of nodes in your cluster (including Ambari-server node)
export hdp_ver=2.4
export install_iotdemo=true
export install_nifidemo=true
curl -sSL https://gist.github.com/abajwa-hw/3f2e211d252bba6cad6a6735f78a4a93/raw | sudo -E sh
c)
To install vanilla HDF 2.1 cluster, you can use the script/steps below:
https://community.hortonworks.com/articles/56849/automate-deployment-of-hdf-20-clusters-using-ambar.html
Note this does not install any of the demos, just a vanilla HDF 2.1 cluster
Deployment
After 5-10 min, you should get a message saying the blueprint was deployed. At this point you can open the Ambari UI (port 8080) and monitor the cluster install. (Note: make sure the port was opened.) The default password is BadPass#1
What gets installed?
Refer to the previous 'What gets installed' section
09-23-2016
06:52 AM
14 Kudos
In the previous article, we showed how to deploy a cluster running HDF 2.x or 3.x. Here we will look into enabling SSL for Apache Nifi on the cluster set up previously and optionally setting up identity mappings. This approach also sets up users/authorizations using Nifi's file-based authorizer (as opposed to the Ranger-based authorizer). Tested with HDF 2.x, 3.0, 3.2
1. Configure Nifi for SSL
There are 2 options for configuring SSL for Apache Nifi via Ambari:
i) Use Nifi CA to generate self-signed certificates (good for quick start/demos)
ii) Use existing certificates (used in production envs)
Option i) - Use Nifi Certificate Authority (CA) to generate self-signed certificates:
Assuming Nifi CA is already installed (via Ambari when you installed NiFi), you can make the below config changes in Ambari under Nifi > Configs > "Advanced nifi-ambari-ssl-config" and click Save to commit the changes:
a) Enable SSL? Check box
b) Clients need to authenticate? Check box
c) NiFi CA Token - Set this to a long, random value (at least 16 chars) but make sure you remember what it is set to
d) Initial Admin Identity - set this to the long form (full DN) of the identity of who your nifi admin user should be e.g. CN=nifiadmin, OU=CLOUD.HORTONWORKS.COM (note the space after the comma)
e) Node Identities - set this to the long form (full DN) of the identity for each node running Nifi (replace CN entries below with FQDNs of nodes running Nifi...also note the space after the comma) e.g. <property name="Node Identity 1">CN=node1.fqdn, OU=CLOUD.HORTONWORKS.COM</property>
<property name="Node Identity 2">CN=node2.fqdn, OU=CLOUD.HORTONWORKS.COM</property>
<property name="Node Identity 3">CN=node3.fqdn, OU=CLOUD.HORTONWORKS.COM</property>
Tip: By default the node identities are commented out using <!-- and --> tags. As you are updating this field, make sure you remove these or your changes will not take effect.
f) NiFi CA DN suffix - in case you are not using OU=NIFI then you need to change this too (note the space after the comma) e.g. , OU=CLOUD.HORTONWORKS.COM
g) (Optional) You may also choose to set the Identity Mapping properties at this time. These are used to normalize identities for better integration with LDAP/AD (e.g. if you wish to login as nifiadmin@CLOUD.HORTONWORKS.COM instead of CN=nifiadmin, OU=CLOUD.HORTONWORKS.COM). Let's skip this for now...step #6 (see end of this article) is provided to show how we can switch to using these later on in the process.
Summary of above changes:
Note on identity fields above: These do not need to be set if you plan to use the Ranger authorizer. But if you plan on logging into the Nifi UI before enabling Ranger, you will need to set these. When setting these, you must make sure that on all the nodes, authorizations.xml does not contain any policies. On initial install they should already have no policies, but, for example, if you made a mistake setting these the first time around and want to modify the values, for the new values to take effect you will need to delete authorizations.xml on all the nodes before restarting Nifi. You can find authorizations.xml under /var/lib/nifi/conf by default (this location can be configured by 'Nifi internal config dir').
Troubleshooting node identities: How will you know you made a mistake while setting node identities? Usually, if the node identities field was not correctly set, when you attempt to open the Nifi UI you will see an untrusted proxy error similar to below. You will see a similar 'Untrusted proxy' error in /var/log/nifi/nifi-user.log:
[NiFi Web Server-172] o.a.n.w.s.NiFiAuthenticationFilter Rejecting access to web api: Untrusted proxy CN=tsys-nifi0.field.hortonworks.com, OU=NIFI
In the above case, you would need to double check that the 'Node identity' values you provided in Ambari match the one from the log file (e.g. CN=tsys-nifi0.field.hortonworks.com, OU=NIFI) and ensure the values are not commented out. Next, you would manually delete /var/lib/nifi/conf/authorizations.xml from all nodes running Nifi and then restart the Nifi service via Ambari.
Notes on Nifi CA:
If you already enabled SSL and want to change the OU (or want to move the CA to a different node), you can force regeneration of the certificates by either checking the "NiFi CA Force Regenerate" checkbox or changing the passwords
If you previously were not using the CA and had set the passwords, but now want to start using the CA, you can clear the passwords and check the "NiFi CA Force Regenerate" checkbox
Option ii) - Use existing certificates:
First manually copy the certificates to all nodes running Nifi (e.g. under /usr/hdf/current/nifi/conf), then make the below config changes in Ambari under Nifi > Configs > "Advanced nifi-ambari-ssl-config" and click Save to commit the changes:
a) Enable SSL? Check box
b) Clients need to authenticate? Check box
c) Set Keystore and Truststore path e.g. {{nifi_config_dir}}/keystore.jks
d) Set Keystore and Truststore type e.g. JKS
e) Set Keystore and Truststore passwords
f) Initial Admin Identity - set this to the long form (full DN) of the identity of who your nifi admin user should be e.g. CN=nifiadmin, OU=CLOUD.HORTONWORKS.COM
g) Node Identities - set this to the long form (full DN) of the identity for each node running Nifi (replace nodeX.fqdn with FQDNs of nodes running Nifi) e.g. <property name="Node Identity 1">CN=node1.fqdn, OU=CLOUD.HORTONWORKS.COM</property>
<property name="Node Identity 2">CN=node2.fqdn, OU=CLOUD.HORTONWORKS.COM</property>
<property name="Node Identity 3">CN=node3.fqdn, OU=CLOUD.HORTONWORKS.COM</property>
h) (Optional) You may also choose to set the Identity Mapping properties at this time. Step #6 (see end of this article) is provided to show how we can switch to using these later on in the process.
2. Enable SSL for Nifi
For both options, once the above changes have been made, Ambari will prompt you to restart Nifi. After restarting, it may take a minute for the Nifi UI to come up. You can track the progress by monitoring nifi-app.log. You can do this by either tailing the log via SSH or using Logsearch:
tail -f /var/log/nifi/nifi-app.log
Another option is to run the Nifi service check from Ambari. It will keep checking whether the UI came up until it does:
3. Generate client certificate
In order to log in to SSL-enabled Nifi, you will need to generate a client certificate and import it into your browser. If you used the CA, you can use the tls-toolkit that comes with the Nifi CA. First run below from the Ambari node to install the toolkit:
wget http://localhost:8080/resources/common-services/NIFI/1.0.0/package/archive.zip
unzip archive.zip
Then run below to generate the keystore. You will need to pass in your values for:
-D: pass in your "Initial Admin Identity" value
-t: pass in your "CA token" value
-c: pass in the hostname of the node where Nifi CA is running
export JAVA_HOME=/usr/java/default
./files/nifi-toolkit-*/bin/tls-toolkit.sh client -c <nifi_CA_host.fqdn> -D 'CN=nifiadmin, OU=CLOUD.HORTONWORKS.COM' -p 10443 -t hadoop -T pkcs12
If you pass in the wrong CA token value, you will see an error like:
Service client error: Received response code 403 with payload {"hmac":null,"pemEncodedCertificate":null,"error":"forbidden"}
Before we can import the certificate, we will need to find the password to import it with. To do this, run below:
cat config.json | grep keyStorePassword
(Optional) - The password generated above will be a long, randomly generated string. If you want to change this password to one of your choosing instead, first run the below to remove the keystore/truststore:
rm -f keystore.pkcs12 truststore.pkcs12
Then edit config.json by modifying the value of "keyStorePassword" to your desired password:
vi config.json
Then re-run tls-toolkit.sh as below:
./files/nifi-toolkit-*/bin/tls-toolkit.sh client -F
At this point the keystore.pkcs12 has been generated. Rename it to keystore.p12 and transfer it (e.g. via scp) over to your local laptop:
mv keystore.pkcs12 keystore.p12
4. Import certificate to your browser
The exact steps depend on your OS and browser. For example, if using Chrome on Mac, use the "Keychain Access" app: File > Import items > Enter password from above (you will need to type it out). For a Firefox example see here
5. Check Nifi access
Now open the Nifi UI using the Quicklink in Ambari. After selecting the certificate you imported earlier, follow the below screens to get through the Chrome warnings and access the Nifi UI:
a) Select the certificate you just imported
b) Choose "Always Allow"
c) Since the certificate was self-signed, Chrome will warn you that the connection is not private. Click "Show Advanced" and click the "Proceed to <hostname>" link
d) At this point, the Nifi UI should come up. On the left, it shows 3/3, meaning all three of the Nifi nodes have joined the cluster. Note that on the top right, it shows you are logged in as "CN=nifiadmin, OU=CLOUD.HORTONWORKS.COM"
e) The /var/log/nifi/nifi-user.log log file will also confirm the user you are getting logged in as: o.a.n.w.s.NiFiAuthenticationFilter Authentication success for CN=nifiadmin, OU=CLOUD.HORTONWORKS.COM
f) Notice also that users.xml and authorizations.xml were created. Checking their content reveals that Nifi auto-created users and access policies for the 'Initial Admin Identity' and 'Node Identities'. More details on these files can be found here
cat /var/lib/nifi/conf/users.xml
cat /var/lib/nifi/conf/authorizations.xml
With this you have successfully enabled SSL for Apache Nifi on your HDF cluster.
6. (Optional) Setup Identity mappings
If desired, we can also set up the Identity mappings to try that option as well.
First let's remove authorizations.xml on all nifi nodes to force Nifi to re-generate it. Without doing this, you will encounter an error at login saying: "Unable to perform the desired action due to insufficient permissions"
rm /var/lib/nifi/conf/authorizations.xml
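For example, to remove the file from every NiFi node in one go (a sketch only - the hostnames and root ssh access are assumptions):
for h in node1.fqdn node2.fqdn node3.fqdn; do
  ssh root@"$h" "rm -f /var/lib/nifi/conf/authorizations.xml"
done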
Now make the below changes in Ambari under Nifi > Configs and click Save. (Tip: Type .dn in the textbox to Filter the fields to easily find these fields)
nifi.security.identity.mapping.pattern.dn = ^CN=(.*?), OU=(.*?)$
nifi.security.identity.mapping.value.dn = $1@$2
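To see what this mapping does, here is a quick sed approximation of the pattern (NiFi itself applies the Java regex above; this one-liner is only an illustration):
echo 'CN=nifiadmin, OU=CLOUD.HORTONWORKS.COM' | sed 's/^CN=\(.*\), OU=\(.*\)$/\1@\2/'
#prints: nifiadmin@CLOUD.HORTONWORKS.COM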
From Ambari, restart Nifi and wait for the Nifi nodes to rejoin the cluster. After about a minute, refresh the Nifi UI and notice that you are now logged in as nifiadmin@CLOUD.HORTONWORKS.COM instead
Opening /var/log/nifi/nifi-user.log confirms this: o.a.n.w.s.NiFiAuthenticationFilter Authentication success for nifiadmin@CLOUD.HORTONWORKS.COM
Opening users.xml, authorizations.xml shows that this time Nifi auto-created users and access policies for the 'Initial Admin Identity' and 'Node Identities' in both unmapped (e.g. CN=nifiadmin, OU=CLOUD.HORTONWORKS.COM) and mapped (e.g. nifiadmin@CLOUD.HORTONWORKS.COM) formats: # cat /var/lib/nifi/conf/users.xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tenants>
<groups/>
<users>
<user identifier="60911b91-233d-33fe-abe9-b832d8fb06fc" identity="nifiadmin@CLOUD.HORTONWORKS.COM"/>
<user identifier="dbbad79e-b7d8-30a7-963a-d152f1953343" identity="abajwa-hdf-dev-rhel7-1.openstacklocal@CLOUD.HORTONWORKS.COM"/>
<user identifier="7ee92ccf-b548-3de3-b74e-27c1e6e280ab" identity="abajwa-hdf-dev-rhel7-2.openstacklocal@CLOUD.HORTONWORKS.COM"/>
<user identifier="6ce282f4-9da7-31d4-8733-138364d88261" identity="abajwa-hdf-dev-rhel7-3.openstacklocal@CLOUD.HORTONWORKS.COM"/>
<user identifier="e3c6593b-8ab7-3e50-9778-dd662635aa8f" identity="CN=nifiadmin, OU=CLOUD.HORTONWORKS.COM"/>
<user identifier="bb834fc7-7232-3c4d-821e-3e07731100e4" identity="CN=abajwa-hdf-dev-rhel7-3.openstacklocal, OU=CLOUD.HORTONWORKS.COM"/>
<user identifier="6c6c1c9c-90b9-3fc1-9c4f-83db69f9d2b6" identity="CN=abajwa-hdf-dev-rhel7-2.openstacklocal, OU=CLOUD.HORTONWORKS.COM"/>
<user identifier="c5bb13c1-79bc-3c9e-98bc-1593985f7fd1" identity="CN=abajwa-hdf-dev-rhel7-1.openstacklocal, OU=CLOUD.HORTONWORKS.COM"/>
</users>
</tenants>
With this we have completed the setup of Identity mappings with SSL-enabled Nifi.
What to try next?
Configuring other users and access policies (i.e. continuing to use Nifi's file based authorizer):
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#config-users-access-policies Setup Ranger and configure Nifi Ranger plugin (i.e. switching to using Nifi's Ranger authorizer): Using unsecure Ranger:
https://community.hortonworks.com/articles/58769/hdf-20-enable-ranger-authorization-for-hdf-compone.html Or Using secure Ranger:
https://community.hortonworks.com/articles/60001/hdf-20-integrating-secured-nifi-with-secured-range.html
09-23-2016
02:53 AM
15 Kudos
Highlights of integrating Apache NiFi with Apache Ambari/Ranger
Article credits: @Ali Bajwa, @Bryan Bende, @jluniya, @Yolanda M. Davis, @brosander
With the recently announced HDF 2.0, users are able to deploy an HDF cluster comprised of Apache NiFi, Apache Storm, Apache Kafka and other components. The mechanics of setting this up using Apache Ambari’s Install Wizard are outlined in the official documentation here and sample steps to automate the setup via Ambari blueprints are provided here. The goal of this article is to highlight some features NiFi administrators can leverage when using Ambari managed HDF 2.0 clusters vs using NiFi standalone
The article is divided into sections on how the integration helps administrators with HDF:
Deployment
Configuration
Monitoring
Security
Ease of Deployment
Users have the choice of deploying NiFi through the Ambari install wizard or operationalizing the deployment via blueprints automation
(For detailed steps, see the links provided in the line above)
Using the wizard, users can choose which nodes NiFi should be installed on. So users can:
Either choose NiFi hosts at time of cluster install
...OR Add NiFi to existing host after the cluster is already installed and then start it. Note that in this case, ‘Zookeeper client’ must be installed on a host first before NiFi can be added to it
Ambari also allows users to configure which user/group NiFi runs as. This is done via the Misc tab which is editable either when cluster installed or when NiFi service is added to existing cluster for the first time.
Starting with Ambari 2.4, users can also remove the NiFi service from Ambari, but note that this does not remove the bits from the cluster.
NiFi can be stopped/started/configured across the cluster via both the Ambari UI and Ambari's REST APIs (see the sketch at the end of this section)
The same can be done on individual hosts:
For easy access to the NiFi UI, quick links are available. The benefit of using these is that the URL is dynamically determined based on the user's settings (e.g. which ports were specified and whether SSL is enabled)
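For the REST-based control mentioned above, a minimal sketch of stopping NiFi via Ambari's API (the cluster name HDF and the admin/admin credentials are assumptions; set the state to STARTED to start it again):
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop NiFi via REST"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://<ambari-host>:8080/api/v1/clusters/HDF/services/NIFI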
Ease of Configuration
Ambari allows configurations to be done once across the cluster. This is time saving because when setting up NiFi standalone, users need to manage configuration files on each node NiFi is running on
The most important NiFi config files are exposed via Ambari and are managed there (e.g. nifi.properties, bootstrap.conf, etc.)
When going through the configuration process, there are a number of ways Ambari provides assistance for the admin:
Help text displayed, on hover, with property descriptions
Checkboxes instead of true/false values
User friendly labels and default values
‘Computed’ values can be automatically handled (e.g. node address)
NiFi benefits from other standard Ambari config features like:
Update configs via Ambari REST API
Configuration history is available, meaning that users can diff versions, revert to an older version, etc.
Host-specific configurations can be managed using ‘Config groups’ feature where users can:
‘override’ a value (e.g. max mem in the screenshot) and
create a subset group of hosts that will use that value
‘Common’ configs are grouped together and exposed in the first config section (‘Advanced NiFi-ambari-config’) to allow configuration of commonly used properties:
Ports (nonSSL, SSL, protocol)
Initial and max memory (Xms, Xmx)
Repo default dir locations (provenance, content, db, flow file)
‘Internal’ dir location - contains files NiFi will write to
‘conf’ subdir for flow/tar.gz, authorizations.xml
‘state’ subdir for internal state
Can change subdir names by prefixing the desired subdir name with ‘{NiFi_internal_dir}/’
Sensitive property key (used to encrypt sensitive property values)
Zookeeper znode for NiFi
Contents of NiFi.properties are exposed under ‘Advanced NiFi-properties’ as key/value pairs with helptext
Values replaced by Ambari shown surrounded by double braces e.g.{{ }} but can be overridden by end user
Properties can be updated or added to NiFi.properties via ‘Custom NiFi-properties’ and will get written to all nodes
It also handles properties whose values need to be ‘computed’ e.g.
‘Node address’ fields are populated with each hosts own FQDN
Conditional logic handled:
When SSL is enabled, populates nifi.web.https.host/port
When SSL is disabled, populates nifi.web.http.host/port
Other property-based configuration files exposed as jinja templates (large text box)
Values that will be replaced by Ambari shown surrounded by double braces e.g. {{ }} but can be overridden by end user
Properties can be added/updated in the template and will get written to all nodes
Other xml based config files also exposed as jinja templates
Values replaced by Ambari shown surrounded by double braces e.g. {{ }} but can be overridden
Elements can be updated/added and will get written to all nodes
Note that config files are written out with either 0400 or 0600 permissions
Why? Because some property files contain plaintext passwords
Ease of Debugging
Logsearch integration is included for ease of visualizing/debugging NiFi logs without connecting to the system, e.g. nifi-app.log, nifi-user.log, nifi-bootstrap.log
Note: Logsearch component is Tech Preview in HDF 2.0
By default, monitors FATAL,ERROR,WARN messages (for all HDF services)
Can view/drill into errors at component level or host level
Can filter errors based on severity (fatal, error, warn, info, debug, trace)
Can exclude ‘noisy’ messages to find the needle in the haystack
Can ‘tail’ log from Logsearch UI
By clicking the ‘refresh’ button or ‘play’ button (to auto refresh every 10s)
Ease of Monitoring
NiFi Service check: Used to ensure that the NiFi UI has come up after restart. It can also be invoked via REST API for automation
NiFi alerts are host-level alerts that let admins know when a NiFi process goes down
Can temporarily be disabled by turning on maintenance mode
Alerts tab in Ambari allows users to disable or configure alerts (e.g. changing polling intervals)
Admins can choose to receive notifications via email or SNMP through the alerts framework
AMS (Ambari Metrics) integration
When NiFi is installed via Ambari, an Ambari reporting task is auto-created in NiFi, pointing to the cluster’s AMS collector host/port (autodetected)
How is the task autocreated? By providing a configurable initial flow.xml (which can also be used to deploy any flows you like when NiFi is deployed) …..
...and passing arguments (like AMS url) via bootstrap.conf. Advantage of doing it this way: if the collector is ever moved to a different host in the cluster, Ambari will let NiFi know (next time NiFi is restarted after the move)
As a result of the metrics integration, users get a dashboard for NiFi metrics in Ambari, such as:
Flowfiles sent/received
MBs read/written
JVM usage/thread counts
Dashboard widgets can:
be drilled into to see results from last 1,2,4 hours, day, week etc
export metrics data to csv or JSON
These same metrics can be viewed in Grafana dashboard:
Grafana can be accessed via quick link under ‘Ambari metrics’ service in Ambari
Pre-configured dashboards are available for each service but users can easily create custom dashboards for each component too
Ease of Security Setup
NiFi Identity mappings
These are used to map identities in DN pattern format (e.g. CN=Tom, OU=NiFi) into common identity strings (e.g. Tom@NiFi)
The patterns can be configured via ‘Advanced NiFi-properties’ section of Ambari configs. Sample values are provided via helptext
ActiveDirectory/LDAP integration
To enable users to login to NiFi using AD/LDAP credentials the ‘Advanced NiFi-login-identity-providers.xml’ section can be used to setup an ldap-provider for NiFi. Commented out sample xml fields are provided for the relevant settings e.g.
AD/LDAP url, search base, search filter, manager credentials
SSL for NiFi
Detailed steps for enabling SSL/identity mappings for Nifi available here
Options for SSL for NiFi:
1. Use NiFi CA to generate self-signed certificates
good for quick start/demos
2. Use your existing certificates
Usually done for production envs
SSL related configs are combined together in ‘Advanced NiFi-ambari-ssl-config’ config panel
Checkbox for whether SSL is enabled
NiFi CA fields - to configure certificate to be generated:
NiFi CA token(required)
NiFi CA DN prefix/suffix
NiFi CA Cert duration
NiFi CA host port
Checkbox for ‘NiFi CA Force Regenerate’
Keystore/truststore related fields - location/type of certificates:
Paths
Passwords
Types
Node identity fields:
Initial Admin Identity: long form of identity of Nifi admin user
Node Identities: long form of identities of nodes running Nifi
SSL Option 1 - using NiFi CA to generate new certificates through Ambari:
Just check “Enable SSL?” box and make sure CA token is set
Optionally update below as needed:
NiFi CA DN prefix/suffix
NiFi CA Cert duration
NiFi CA port
Check ‘NiFi CA Force Regenerate’ box
For changing certs after SSL already enabled
You can force regeneration of the certificates by either:
checking “NiFi CA Force Regenerate” checkbox
Or changing the passwords
You can also manually use tls-toolkit in standalone mode to generate new certificates outside of Ambari
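A minimal sketch of that standalone usage (the hostnames, DN and output directory are placeholders, and you should confirm the exact flags against your toolkit version's help output):
tls-toolkit.sh standalone -n 'node1.fqdn,node2.fqdn,node3.fqdn' -C 'CN=nifiadmin, OU=NIFI' -o /tmp/nifi-certs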
SSL Option 2 - using your existing certificates:
Manually copy certificates to nodes
Populate keystore/truststore path/password/type fields
For keystore/trust paths that contain FQDN that need resolving:
use {NiFi_node_ssl_host} (This is useful for certs generated by NiFi-toolkit as they have the host’s FQDN in their name/path)
In both cases while enabling SSL, you will also need to populate the identity fields. This is to be able to login to NiFi after enabling SSL (assuming Ranger authorizer will not be used)
When setting these, first make sure that on all the nodes, authorizations.xml does not contain any policies. If it does, delete authorizations.xml from all nodes running NiFi. Otherwise, the identity related changes will not take effect.
On initial install there will not be any policies, but they will get created the first time the Identity fields are updated and NiFi restarted (i.e. if you entered incorrect values the first time, you will need to delete policies before re-entering the values)
Then save config changes and restart NiFi from Ambari to enable SSL
If NiFi CA option was used, this is the point at which certificates will get generated
Ranger integration with NiFi
Before installing Ranger there are some manual prerequisite steps:
Setup RDBMs to store Ranger policies
Install/setup Solr to store audits. In test/development environments, Ranger can re-use the Solr that comes with Logsearch/Ambari Infra services
Detailed steps for integrating Nifi with Ranger here
During Ranger install…
The backend RDBMS details are provided first via ‘Ranger Admin’ tab
The NiFi Ranger plugin can be enabled to manage NiFi authorization policies in Ranger via the 'Ranger Plugin' tab
Users/Groups can be synced from Active Directory/LDAP via ‘Ranger User Info’ tab
Ranger audits can be configured via ‘Ranger audit’ tab
After enabling Ranger and restarting NiFi, new config tabs appear under NiFi configs. NiFi/Ranger related configs can be accessed/updated here:
Ranger can be configured to communicate and retrieve resources from NiFi using a keystore (that has been imported into NiFi’s truststore)
Using a NiFi REST Client, Ranger is able to retrieve NiFi’s API endpoint information that can be secured via authorization
This list of resources is made available as auto-complete options when users are attempting to configure policies in Ranger
To communicate with NiFi over SSL a keystore and truststore should be available (with Ranger’s certificate imported into NiFi node truststores) for identification. The Owner for Certificate should be populated as well.
Once Ranger is identified NiFi will authorize Ranger to perform its resource lookup
Ranger policies can be created for NiFi (either via the Ranger UI or its REST API; see the sketch below)
Create users in Ranger for NiFi users (either from certificate DNs, or import using AD/LDAP sync)
Decide which user has what type of access on what identifier
Default policy automatically created on first setup
Policy updates will be picked up by Nifi within 30 seconds (by default)
Recommended approach:
Grant user access to modify the NiFi flow with a policy for /process-groups/<root-group-id> with RW
Create a separate policy for /provenance/process-groups/<root-group-id> (with each of the cluster node DNs) for read access
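As an illustration of the REST API route mentioned above, a sketch of creating a read policy on /flow (the service name HDF_nifi, the user, the admin/admin credentials and the "nifi-resource" field name are assumptions based on a typical NiFi service definition, not values from this article):
curl -u admin:admin -H 'Content-Type: application/json' -X POST http://<ranger-host>:6080/service/public/v2/api/policy -d '{
  "service": "HDF_nifi",
  "name": "flow-read-for-nodes",
  "resources": { "nifi-resource": { "values": ["/flow"], "isExcludes": false, "isRecursive": false } },
  "policyItems": [ { "users": ["CN=node-1.fqdn, OU=NIFI"], "accesses": [ { "type": "READ", "isAllowed": true } ] } ]
}'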
Ranger now tracks audits for NiFi (stored in standalone Solr or Logsearch Solr)
For example: What user attempted what kind of NiFi access from what IP at what time?
Ranger also audits user actions related to NiFi in Ranger
For example: Which user created/updated NiFi policy at what time?
Kerberos for NiFi
HDF cluster with NiFi can be kerberized via standard Ambari security wizard (via MIT KDC or AD)
Also supported: NiFi installation on already kerberized HDF cluster
Detailed steps for enabling kerberos for HDF available here
Wizard will allow configuration of principal name and keytab path
The NiFi principal and keytabs will automatically be created/distributed across the cluster where needed by Ambari
During the security wizard, nifi.properties will automatically be updated:
nifi.kerberos.service.principal
nifi.kerberos.keytab.location
nifi.kerberos.krb5.file
nifi.kerberos.authentication.expiration
After enabling kerberos, login provider will also be switched to kerberos under the covers
Allows users to login via KDC credentials instead of importing certificates into the browser
Writing audits to kerberized Solr supported
After security wizard completes, NiFi’s kerberos details will appear alongside other components (under Admin > Kerberos)
Try it out yourself!
Installation using official documentation: link
Automation to deploy clusters using Ambari blueprints: link
Enable SSL/Identity mappings for Nifi via Ambari: link
Enable Ranger authorization for Nifi: link
Enable Kerberos for HDF via Ambari: link
09-16-2016
07:21 AM
18 Kudos
Update Feb 2018 - Updated article for HDF 3.1: https://community.hortonworks.com/articles/173816/automate-deployment-of-hdf-31-clusters-using-ambar.html
Summary: Ambari blueprints can be used to automate setting up clusters. With Ambari support being added to HDF 2.0, the same can be done for HDF clusters running Nifi, Storm, Kafka.
This article shows how you can use the ambari-bootstrap project to easily generate a blueprint and deploy HDF clusters to single-node or multi-node development/demo environments in 5 easy steps. If you prefer, a script is also provided at the bottom of the article that automates these steps, so you can deploy the cluster in a few commands. Tested with HDF 2.x and 3.0
There is also a single node HDF 2.1 demo cluster available on AWS as an AMI which can be brought up in 10 min. Details here
Prerequisite: A number of freshly installed hosts running CentOS/RHEL 6 or 7 where HDF is to be installed
Reminder: Do not try to install HDF in an env where Ambari or HDP are already installed (e.g. HDP sandbox or HDP cluster)
Steps:
1. After choosing a host where you would like Ambari-server to run, first let's prepare the other hosts. Run this on all hosts where Ambari-server will not be running to run pre-requisite steps, install Ambari-agents and point them to the Ambari-server host:
export ambari_server=<FQDN of host where ambari-server will be installed>; #replace this
export install_ambari_server=false
export ambari_version=2.5.1.0 ##don't use 2.5.2 for HDF, there is a bug
curl -sSL https://raw.githubusercontent.com/seanorama/ambari-bootstrap/master/ambari-bootstrap.sh | sudo -E sh ;
2. Run the remaining steps on the host where Ambari-server is to be installed. These run pre-reqs and install Ambari-server:
export ambari_password="admin" # customize password
export cluster_name="HDF" # customize cluster name
export ambari_services="ZOOKEEPER NIFI KAFKA STORM LOGSEARCH AMBARI_METRICS AMBARI_INFRA"
export hdf_ambari_mpack_url="http://public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/3.0.0.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.0.0.0-453.tar.gz" #replace with the mpack url you want to install
export ambari_version=2.5.1.0 ##don't use 2.5.2 for HDF, there is a bug
#install bootstrap
yum install -y git python-argparse
git clone https://github.com/seanorama/ambari-bootstrap.git
#Runs pre-reqs and install ambari-server
export install_ambari_server=true
~/ambari-bootstrap/ambari-bootstrap.sh
3. Install mpack and restart Ambari so it forgets HDP and recognizes only HDF stack: ambari-server install-mpack --mpack=${hdf_ambari_mpack_url} --verbose
ambari-server restart
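As an optional check that the HDF stack is now registered with Ambari (assuming the default admin/admin login):
curl -s -u admin:admin http://localhost:8080/api/v1/stacks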
At this point, if you wanted to use the Ambari install wizard to install HDF, you could do that as well. Just open http://<Ambari host IP>:8080, log in and follow the steps in the doc. Otherwise, to proceed with deploying via blueprints, follow the remaining steps.
4. (Optional) Modify any configurations you like for any of the components by creating configuration-custom.json. Below shows how to customize Nifi dirs, ports, and the user/group the service runs as. Basically you would create sections in the JSON corresponding to the name of the relevant config file and include the property name and desired value. For a complete listing of available Nifi property files and corresponding properties that Ambari recognizes, check the Nifi service code
cd ~/ambari-bootstrap/deploy/
tee configuration-custom.json > /dev/null << EOF
{
"configurations" : {
"nifi-ambari-config": {
"nifi.security.encrypt.configuration.password": "changemeplease",
"nifi.content.repository.dir.default": "/nifi/content_repository",
"nifi.database.dir": "/nifi/database_repository",
"nifi.flowfile.repository.dir": "/nifi/flowfile_repository",
"nifi.internal.dir": "/nifi",
"nifi.provenance.repository.dir.default": "/nifi/provenance_repository",
"nifi.max_mem": "1g",
"nifi.node.port": "9092",
"nifi.node.protocol.port": "9089",
"nifi.node.ssl.port": "9093"
},
"nifi-env": {
"nifi_user": "mynifiuser",
"nifi_group": "mynifigroup"
}
}
}
EOF
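As an optional sanity check before deploying, you can confirm the file you just wrote is valid JSON (python ships with CentOS/RHEL):
python -m json.tool configuration-custom.json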
5. If you chose to skip the previous step, run below to generate a basic configuration-custom.json file. Change the password, but make sure it's at least 12 characters or deployment will fail.
echo '{ "configurations" : { "nifi-ambari-config": { "nifi.security.encrypt.configuration.password": "changemeplease" }}}' > ~/ambari-bootstrap/deploy/configuration-custom.json
Then generate a recommended blueprint and deploy the cluster install. Make sure to set host_count to the total number of hosts in your cluster (including the Ambari server):
export host_count=<Number of total nodes>
export ambari_stack_name=HDF
export ambari_stack_version=3.0 #replace this with HDF stack version
export ambari_services="NIFI KAFKA STORM AMBARI_METRICS ZOOKEEPER LOGSEARCH AMBARI_INFRA"
./deploy-recommended-cluster.bash
You can now log into Ambari at http://<Ambari host IP>:8080 and sit back and watch your HDF cluster get installed!
Notes:
a) This will only install Nifi on a single node of the cluster by default
b) The Nifi Certificate Authority (CA) component will be installed by default. This means that, if you wanted to, you could enable SSL for Nifi out of the box by including a "nifi-ambari-ssl-config" section in the above configuration-custom.json:
"nifi-ambari-ssl-config": {
"nifi.toolkit.tls.token": "hadoop",
"nifi.node.ssl.isenabled": "true",
"nifi.security.needClientAuth": "true",
"nifi.toolkit.dn.suffix": ", OU=HORTONWORKS",
"nifi.initial.admin.identity": "CN=nifiadmin, OU=HORTONWORKS",
"content":"<property name='Node Identity 1'>CN=node-1.fqdn, OU=HORTONWORKS</property><property name='Node Identity 2'>CN=node-2.fqdn, OU=HORTONWORKS</property><property name='Node Identity 3'>node-3.fqdn, OU=HORTONWORKS</property>"
},
Make sure to replace node-x.fqdn with the FQDN of each node running Nifi c) As part of the install, you can also have an existing Nifi flow deployed by Ambari. First, read in a flow.xml file from existing Nifi system (you can find this in flow.xml.gz). For example, run below to read the flow for the Twitter demo into an env var twitter_flow=$(curl -L https://gist.githubusercontent.com/abajwa-hw/3a3e2b2d9fb239043a38d204c94e609f/raw)
Then include a "nifi-flow-env" section in the above configuration-custom.json when you run the tee command - to have ambari-bootstrap include the whole flow xml into the generated blueprint:
"nifi-flow-env" : {
"properties_attributes" : { },
"properties" : {
"content" : "${twitter_flow}"
}
}
d) In case you would like to review the generated blueprint before it gets deployed, just set the below variable as well: export deploy=false
The blueprint will be created under ~/ambari-bootstrap/deploy/tempdir*/blueprint.json
Sample script
A sample script based on this logic is available here. In addition to the steps above it can also optionally:
enable installation of Nifi on all nodes of the cluster
set up Ambari's Postgres DB for Ranger (in case Ranger will be installed post-cluster-install)
set up a KDC (in case kerberos will be enabled later)
For example, to deploy a single node HDF sandbox, you can just run below on a freshly installed CentOS 6 VM (don't run this on the sandbox or a VM where Ambari is already installed). You can customize the behaviour by exporting environment variables as shown.
#run below as root
export host_count=1;
curl -sSL https://gist.github.com/abajwa-hw/ae4125c5154deac6713cdd25d2b83620/raw | sudo -E sh ;
What next?
Now that your cluster is up, you can explore what Nifi's Ambari integration means: https://community.hortonworks.com/articles/57980/hdf-20-apache-nifi-integration-with-apache-ambarir.html
Next, you can enable SSL for Nifi: https://community.hortonworks.com/articles/58009/hdf-20-enable-ssl-for-apache-nifi-from-ambari.html
Sample blueprint
A sample generated blueprint for a 3 node cluster is provided for reference here:
{
"Blueprints": {
"stack_name": "HDF",
"stack_version": "2.0"
},
"host_groups": [
{
"name": "host-group-1",
"components": [
{
"name": "METRICS_MONITOR"
},
{
"name": "SUPERVISOR"
},
{
"name": "LOGSEARCH_LOGFEEDER"
},
{
"name": "NIFI_CA"
},
{
"name": "NIMBUS"
},
{
"name": "DRPC_SERVER"
},
{
"name": "ZOOKEEPER_SERVER"
},
{
"name": "STORM_UI_SERVER"
}
]
},
{
"name": "host-group-2",
"components": [
{
"name": "NIFI_MASTER"
},
{
"name": "METRICS_MONITOR"
},
{
"name": "SUPERVISOR"
},
{
"name": "INFRA_SOLR"
},
{
"name": "INFRA_SOLR_CLIENT"
},
{
"name": "LOGSEARCH_LOGFEEDER"
},
{
"name": "LOGSEARCH_SERVER"
},
{
"name": "ZOOKEEPER_CLIENT"
},
{
"name": "METRICS_GRAFANA"
},
{
"name": "KAFKA_BROKER"
},
{
"name": "ZOOKEEPER_SERVER"
}
]
},
{
"name": "host-group-3",
"components": [
{
"name": "METRICS_MONITOR"
},
{
"name": "SUPERVISOR"
},
{
"name": "LOGSEARCH_LOGFEEDER"
},
{
"name": "METRICS_COLLECTOR"
},
{
"name": "ZOOKEEPER_SERVER"
}
]
}
],
"configurations": [
{
"nifi-ambari-config": {
"nifi.node.protocol.port": "9089",
"nifi.internal.dir": "/nifi",
"nifi.node.port": "9092",
"nifi.provenance.repository.dir.default": "/nifi/provenance_repository",
"nifi.content.repository.dir.default": "/nifi/content_repository",
"nifi.flowfile.repository.dir": "/nifi/flowfile_repository",
"nifi.max_mem": "1g",
"nifi.database.dir": "/nifi/database_repository",
"nifi.node.ssl.port": "9093"
}
},
{
"ams-env": {
"metrics_collector_heapsize": "512"
}
},
{
"ams-hbase-env": {
"hbase_master_heapsize": "512",
"hbase_regionserver_heapsize": "768",
"hbase_master_xmn_size": "192"
}
},
{
"storm-site": {
"metrics.reporter.register": "org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsReporter"
}
},
{
"nifi-env": {
"nifi_group": "mynifigroup",
"nifi_user": "mynifiuser"
}
},
{
"ams-hbase-site": {
"hbase.regionserver.global.memstore.upperLimit": "0.35",
"hbase.regionserver.global.memstore.lowerLimit": "0.3",
"hbase.tmp.dir": "/var/lib/ambari-metrics-collector/hbase-tmp",
"hbase.hregion.memstore.flush.size": "134217728",
"hfile.block.cache.size": "0.3",
"hbase.rootdir": "file:///var/lib/ambari-metrics-collector/hbase",
"hbase.cluster.distributed": "false",
"phoenix.coprocessor.maxMetaDataCacheSize": "20480000",
"hbase.zookeeper.property.clientPort": "61181"
}
},
{
"logsearch-properties": {}
},
{
"kafka-log4j": {}
},
{
"ams-site": {
"timeline.metrics.service.webapp.address": "localhost:6188",
"timeline.metrics.cluster.aggregate.splitpoints": "kafka.network.SocketServer.IdlePercent.networkProcessor.0.5MinuteRate",
"timeline.metrics.host.aggregate.splitpoints": "kafka.network.SocketServer.IdlePercent.networkProcessor.0.5MinuteRate",
"timeline.metrics.host.aggregator.ttl": "86400",
"timeline.metrics.service.handler.thread.count": "20",
"timeline.metrics.service.watcher.disabled": "false"
}
},
{
"kafka-broker": {
"kafka.metrics.reporters": "org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter"
}
},
{
"ams-grafana-env": {}
}
]
}
08-08-2016
01:34 AM
3 Kudos
@Zach Kirsch here is how I usually detect the cluster name by parsing the JSON output via sed:
export SERVICE=ZEPPELIN
export PASSWORD=admin
export AMBARI_HOST=localhost
#detect name of cluster
output=`curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' http://$AMBARI_HOST:8080/api/v1/clusters`
CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`
echo $CLUSTER
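An alternative sketch that parses the same API response with python instead of sed (note the -i flag is dropped so only the JSON body is piped):
CLUSTER=$(curl -s -u admin:$PASSWORD http://$AMBARI_HOST:8080/api/v1/clusters | python -c 'import sys,json; print(json.load(sys.stdin)["items"][0]["Clusters"]["cluster_name"])')
echo $CLUSTER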
08-02-2016
06:02 PM
1 Kudo
@Timothy Spann: Can you try @nmaillard's suggestion from here?
07-27-2016
07:24 PM
1 Kudo
@bigdata.neophyte: We have a single node HDP 2.3 VM with Kerberos, Ranger and Ranger KMS enabled, available for download here. This was done as part of a security workshop/webinar we did: https://github.com/abajwa-hw/security-workshops#current-release
07-27-2016
06:47 PM
1 Kudo
@vshukla can confirm but I believe that Magellan has not been ported to Spark 1.6 yet (only 1.4)
07-27-2016
06:44 PM
2 Kudos
In general you can change the executor memory in Zeppelin by modifying zeppelin-env.sh and including:
export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=1g"
If you are installing Zeppelin via Ambari, you can set this via zeppelin.executor.mem (see screenshot). You can follow the tutorial here, which goes through creating a queue and configuring Zeppelin to use it: https://github.com/hortonworks-gallery/ambari-zeppelin-service/blob/master/README.md
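For example, if you also created a dedicated YARN queue as in the tutorial, you could point Zeppelin's Spark jobs at it the same way (the queue name here is just an assumption):
export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=1g -Dspark.yarn.queue=zeppelin"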