Member since: 08-31-2015
Posts: 15
Kudos Received: 21
Solutions: 3
09-26-2015
01:18 PM
There is also the list of all the contrib views on github. https://github.com/apache/ambari/tree/trunk/contrib/views
02-02-2017
10:28 PM
Hi Neeraj, I am trying to set up Hortonworks HDP 2.5 in AWS, to test the web application described at https://community.hortonworks.com/content/kbentry/38457/credit-fraud-prevention-demo-a-guided-tour.html. This application uses Kafka, Storm, Hive, HBase, and more. Could you please suggest the best way to go ahead?
09-25-2015
09:16 PM
One good thing to show in the tutorial would be how this lets you manage multi-tenancy for Spark (currently only available via Spark on YARN) https://github.com/hortonworks-gallery/ambari-zeppelin-service/blob/master/README.md#zeppelin-yarn-integration
09-25-2015
12:03 AM
7 Kudos
Apache Ranger delivers a comprehensive approach to security for a Hadoop cluster. It provides central security policy administration across the core enterprise security requirements of authorization, accounting, and data protection. Apache Ranger already extends baseline features for coordinated enforcement across Hadoop workloads, from batch and interactive SQL to real-time processing. In this tutorial, we cover using Apache Ranger for HDP 2.3 to secure your Hadoop environment. We will walk through the following topics:
- Support for Knox authorization and audit
- Command line policies in Hive
- Command line policies in HBase
- REST APIs for policy manager

Prerequisite

The only prerequisite for this tutorial is that you have the Hortonworks Sandbox. Once you have the Hortonworks Sandbox, log in through SSH.

Starting Knox Service and Demo LDAP Service

From the Ambari console at http://localhost:8080/ (username and password are both admin), select Knox from the list of Services on the left-hand side of the page.
Then click on Service Actions from the top right-hand side of the page and click on Start.

![](http://www.dropbox.com/s/jhb30dgey8m30n6/Screenshot%202015-09-08%2010.27.06.png?dl=1)

From the following screen you can track the start of the Knox service to completion. Then go back to the Service Actions button on the Knox service and click on Start Demo LDAP. You can track the start of the Demo LDAP service from the following screen.

Knox access scenarios

Check that the Ranger Admin console is running at http://localhost:6080/ from your host machine. The username is admin and the password is admin. If it is not running, you can start it from the command line using:

<code>sudo service ranger-admin start</code>

Click on the sandbox_knox link under the Knox section in the main screen of the Ranger Administration Portal. You can review policy details by clicking on a policy name. To start testing Knox policies, we need to turn off the “global knox allow” policy. Then locate the policy named “Sandbox for Guest” on the Ranger Admin console, edit it, and enable it.

From your local terminal (not from an SSH session on the Sandbox), run this curl command to access WebHDFS:

<code>curl -k -u admin:admin-password 'https://127.0.0.1:8443/gateway/knox_sample/webhdfs/v1?op=LISTSTATUS'</code>

Go to the Ranger Policy Manager tool → Audit screen and check the Knox access (denied) being audited.
Now let us try the same curl command using “guest” user credentials from the terminal:

<code>curl -k -u guest:guest-password 'https://127.0.0.1:8443/gateway/knox_sample/webhdfs/v1?op=LISTSTATUS'</code>

<code>{"FileStatuses":{"FileStatus":[{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16393,"group":"hadoop","length":0,"modificationTime":1439987528048,"owner":"yarn","pathSuffix":"app-logs","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},{"accessTime":0,"blockSize":0,"childrenNum":4,"fileId":16389,"group":"hdfs","length":0,"modificationTime":1439987809562,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":17000,"group":"hdfs","length":0,"modificationTime":1439989173392,"owner":"hdfs","pathSuffix":"demo","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16398,"group":"hdfs","length":0,"modificationTime":1439987529660,"owner":"hdfs","pathSuffix":"hdp","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16394,"group":"hdfs","length":0,"modificationTime":1439987528532,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16396,"group":"hadoop","length":0,"modificationTime":1439987538099,"owner":"mapred","pathSuffix":"mr-history","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16954,"group":"hdfs","length":0,"modificationTime":1439988741413,"owner":"hdfs","pathSuffix":"ranger","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},{"accessTime":0,"blockSize":0,"childrenNum":3,"fileId":16386,"group":"hdfs","length":0,"modificationTime":1440165443820,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},{"accessTime":0,"blockSize":0,"childrenNum":8,"fileId":16387,"group":"hdfs","length":0,"modificationTime":1439988397561,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}]}}</code>
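A listing like the one above can also be consumed programmatically. A minimal sketch (assuming the standard WebHDFS LISTSTATUS JSON shape shown in the output; the abridged sample payload is illustrative) that pulls out the entries:

```python
import json

# Abridged sample of a WebHDFS LISTSTATUS response, matching the shape above
response_text = '''{"FileStatuses":{"FileStatus":[
  {"pathSuffix":"app-logs","owner":"yarn","group":"hadoop","permission":"777","type":"DIRECTORY"},
  {"pathSuffix":"apps","owner":"hdfs","group":"hdfs","permission":"755","type":"DIRECTORY"}]}}'''

def list_entries(payload: str):
    """Return (pathSuffix, owner, permission) tuples from a LISTSTATUS payload."""
    statuses = json.loads(payload)["FileStatuses"]["FileStatus"]
    return [(s["pathSuffix"], s["owner"], s["permission"]) for s in statuses]

for name, owner, perm in list_entries(response_text):
    print(f"{name}\t{owner}\t{perm}")
```

In practice you would feed this the body returned by the curl call through the Knox gateway instead of the inline sample.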
We can check the auditing in the Ranger Policy Manager → Audit screen. The Ranger plugin for Knox intercepts any request made to Knox and enforces policies that are retrieved from the Ranger Administration Portal. You can configure the Knox policies in Ranger to restrict access to a specific service (WebHDFS, WebHCat, etc.) and to a specific user or group, and you can even bind a user/group to an IP address.

Hive grant/revoke permission scenarios

Ranger supports the import of grant/revoke policies set through the command line or Hue for Hive. Ranger can store these policies centrally along with policies created in the administration portal and enforce them in Hive using its plugin.

As a first step, disable the global access policy for Hive in the Ranger Administration Portal. Let us try running a grant operation as user it1 from the command line. Log into the beeline tool using the following command:

<code>beeline -u "jdbc:hive2://sandbox.hortonworks.com:10000/default" -n it1 -p it1 -d org.apache.hive.jdbc.HiveDriver</code>

Then issue the GRANT command:

<code>grant select, update on table xademo.customer_details to user network1;</code>

You should see the following error. Let us check the audit log in the Ranger Administration Portal → Audit. You can see that access was denied for an admin operation for user it1.

We can create a policy in Ranger for user ‘it1’ to be an admin. Create a new policy from the Ranger Admin console and ensure the configuration matches the illustration below. Once the policy has been saved, we can try the beeline command again:

<code>grant select, update on table xademo.customer_details to user network1;</code>

If the command goes through successfully, you will see the policy created/updated in the Ranger Admin Portal → Policy Manager. Ranger checks if there is an existing relevant policy to update; otherwise it creates a new one.
What happened here? The Ranger plugin intercepts GRANT/REVOKE commands in Hive and creates corresponding policies in the Admin portal. The plugin then uses these policies for enforcing Hive authorization (HiveServer2). Users can run further GRANT commands to update permissions and REVOKE commands to take away permissions.

HBase grant/revoke permission scenarios

Ranger supports the import of grant/revoke policies set through the command line in HBase. Similar to Hive, Ranger can store these policies in the Policy Manager and enforce them in HBase using its plugin.

Before you go further, ensure HBase is running from Ambari at http://127.0.0.1:8080 (username and password are both admin). If it is not, go to the Service Actions button on the top right and start the service.

As a first step, let us try running a grant operation as user it1. Disable the public access policy “HBase Global Allow” in the Ranger Administration Portal → Policy Manager, then log into the HBase shell as the ‘it1’ user:

<code>su - it1
[it1@sandbox ~]$ hbase shell</code>
Run a grant command to give “Read”, “Write”, and “Create” access to user mktg1 on table ‘iemployee’:

<code>hbase(main):001:0> grant 'mktg1', 'RWC', 'iemployee'</code>

You should get an Access Denied error as below. Go to the Ranger Administration Portal → Policy Manager and create a new policy to assign “admin” rights to user it1. Save the policy and rerun the HBase command:

<code>hbase(main):006:0> grant 'mktg1', 'RWC', 'iemployee'
0 row(s) in 0.8670 seconds</code>
Check the HBase policies in the Ranger Policy Administration portal. The grant permissions were added to the existing policy for table ‘iemployee’ that we created in the previous step. You can revoke the same permissions, and they will be removed from the Ranger admin. Try this in the same HBase shell:

<code>hbase(main):007:0> revoke 'mktg1', 'iemployee'
0 row(s) in 0.4330 seconds</code>
You can check the existing policy and see that it has been changed.

What happened here? The Ranger plugin intercepts GRANT/REVOKE commands in HBase and creates corresponding policies in the Admin portal. The plugin then uses these policies for enforcing authorization. Users can run further GRANT commands to update permissions and REVOKE commands to take away permissions.

REST APIs for Policy Administration

Ranger policy administration can be managed through REST APIs. Users can use the APIs to create or update policies instead of going into the Administration Portal.

Running REST APIs from the command line

From your local command line shell, run this curl command. This API call will create a policy named “hadoopdev-testing-policy2” within the HDFS repository “sandbox_hdfs”:

<code>curl -i --header "Accept:application/json" -H "Content-Type: application/json" --user admin:admin -X POST http://127.0.0.1:6080/service/public/api/policy -d '{"policyName":"hadoopdev-testing-policy2","resourceName":"/demo/data/test","description":"Testing policy for /demo/data/test","repositoryName":"sandbox_hdfs","repositoryType":"HDFS","permMapList":[{"userList":["mktg1"],"permList":["Read"]},{"groupList":["IT"],"permList":["Read"]}],"isEnabled":true,"isRecursive":true,"isAuditEnabled":true,"version":"0.1.0","replacePerm":false}'</code>
Go to the Policy Manager and see the new policy named “hadoopdev-testing-policy2”. Click on the policy and check the permissions that have been created. The policy id is part of the URL of the policy detail page: http://127.0.0.1:6080/index.html#!/hdfs/1/policy/26. We can use the policy id to retrieve or change the policy. Run the below curl command to get the policy details using the API:

<code>curl -i --user admin:admin -X GET http://127.0.0.1:6080/service/public/api/policy/26</code>

What happened here? We created a policy and retrieved policy details using REST APIs. Users can now manage their policies using API tools or applications integrated with the Ranger REST APIs.

Hopefully, through this whirlwind tour of Ranger, you were introduced to the simplicity and power of Ranger for security administration.
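The same create/retrieve flow can be scripted. Below is a minimal sketch using only Python's standard library, targeting the public policy API shown in the curl examples above; the host, admin credentials, repository name, and policy values are the sandbox values from this walkthrough, not universal defaults:

```python
import base64
import json
import urllib.request

RANGER = "http://127.0.0.1:6080"  # sandbox Ranger Admin from this walkthrough

def build_policy(name, resource, repository):
    """Assemble the same policy payload used in the curl example above."""
    return {
        "policyName": name,
        "resourceName": resource,
        "description": f"Testing policy for {resource}",
        "repositoryName": repository,
        "repositoryType": "HDFS",
        "permMapList": [
            {"userList": ["mktg1"], "permList": ["Read"]},
            {"groupList": ["IT"], "permList": ["Read"]},
        ],
        "isEnabled": True,
        "isRecursive": True,
        "isAuditEnabled": True,
        "version": "0.1.0",
        "replacePerm": False,
    }

def post_policy(policy, user="admin", password="admin"):
    """POST the policy to /service/public/api/policy with basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{RANGER}/service/public/api/policy",
        data=json.dumps(policy).encode(),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # needs a running sandbox
        return json.load(resp)

# With the sandbox up, this mirrors the curl POST above:
# post_policy(build_policy("hadoopdev-testing-policy2", "/demo/data/test", "sandbox_hdfs"))
```

Retrieving a policy would be an analogous GET against /service/public/api/policy/&lt;id&gt;, using the id from the policy detail page URL.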
01-09-2016
09:11 PM
1 Kudo
@Sean Roberts I want to understand the impersonation configuration better. The problem is that it is not clear what is impersonating what. For example, trying to access the Hive view as the admin user failed with "User: hive is not allowed to impersonate user admin". So, by extension, it would seem logical to introduce additional proxy variables hadoop.proxyuser.hive.groups and hadoop.proxyuser.hive.hosts, but what is the group that the hive user needs? Is that information available in the stack trace? Is there a diagram of the view services that maps out the impersonation and user attributes in play?
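For reference, a sketch of what those proxyuser entries look like in core-site.xml. The values are illustrative: a permissive `*` is common on sandboxes, while production setups narrow them to specific hosts and groups:

```
<!-- Illustrative core-site.xml fragment: lets the hive service user
     impersonate end users (such as admin). Wildcards allow any
     host/group; tighten them for production. -->
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
```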
09-29-2015
12:24 AM
2 Kudos
There's a field called "Ambari Metrics User" under "Advanced ams-env". See the screenshot.
09-29-2015
01:21 PM
@MCarter@hortonworks.com The easiest way to compile this is going to be with Maven. The list of dependencies is rather lengthy, and you will spend a good bit of time trying to include them all if you want to use javac. If you have never used Maven before, then it probably isn't installed on your Mac either. The easiest way to install Maven is with the Mac package manager Homebrew. You might not have that installed either, but it's a simple one-liner to install. More information on Homebrew can be found here. It is commonly used among developers on OS X.

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

After Homebrew (brew) is installed, run this command to install Maven:

brew install maven

Once you have successfully installed Maven, you can compile the project by changing to the directory where the pom.xml file is located and running:

mvn clean install package

Once that has completed, you will see the resulting jar at ./target/simple-yarn-app-1.1.0.jar. Hope that helps.