Member since: 09-11-2015
Posts: 41
Kudos Received: 48
Solutions: 14
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2177 | 02-03-2017 09:39 AM |
| | 1916 | 01-31-2017 12:41 PM |
| | 2936 | 01-20-2017 12:38 PM |
| | 4239 | 01-18-2017 01:26 PM |
| | 7407 | 01-11-2017 02:35 PM |
11-11-2016
09:10 PM
Yes, you should copy the id_rsa file from the sandbox to your Windows host. Alternatively, you can copy and paste the contents of id_rsa into the edit box that says 'ssh private key', as in the screenshot below.
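A minimal way to grab the key contents for pasting, assuming the key was generated for root on the sandbox (the path is an assumption):

# Run this on the sandbox, then copy the whole output
# (including the BEGIN/END lines) into the 'ssh private key' box
cat /root/.ssh/id_rsa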
11-10-2016
11:16 AM
@A. Karray You can specify JARs to use with Livy jobs using livy.spark.jars in the Livy interpreter configuration. This should be a comma-separated list of JAR locations, which must be stored on HDFS. Currently local files cannot be used (i.e. they won't be localized on the cluster when the job runs). It is a global setting, so all JARs listed will be available to all Livy jobs run by all users.
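As a sketch, with placeholder paths and JAR names:

# Upload the JARs to HDFS first (locations below are placeholders)
hdfs dfs -put my-udfs.jar /user/zeppelin/jars/
hdfs dfs -put extra-lib.jar /user/zeppelin/jars/
# Then in the Livy interpreter configuration set, e.g.:
#   livy.spark.jars = hdfs:///user/zeppelin/jars/my-udfs.jar,hdfs:///user/zeppelin/jars/extra-lib.jar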
11-03-2016
11:26 AM
1 Kudo
@vamsi valiveti 1) 'show tables;' is the standard SQL way of getting table names; '!tables' is specific to Beeline, so use 'show tables;' to keep your SQL portable to other SQL clients. 2) Use '!sh <command>' to run a shell command, e.g.:
0: jdbc:hive2://hdp224.local:10000/default> !sh hdfs dfs -ls /
Found 9 items
drwxrwxrwx - yarn hadoop 0 2016-11-01 14:07 /app-logs
drwxr-xr-x - hdfs hdfs 0 2016-11-01 12:41 /apps
drwxr-xr-x - yarn hadoop 0 2016-11-01 15:55 /ats
drwxr-xr-x - usera users 0 2016-11-01 14:29 /data
drwxr-xr-x - hdfs hdfs 0 2016-11-01 12:38 /hdp
drwxr-xr-x - mapred hdfs 0 2016-11-01 12:38 /mapred
drwxrwxrwx - mapred hadoop 0 2016-11-01 12:38 /mr-history
drwxrwxrwx - hdfs hdfs 0 2016-11-01 15:56 /tmp
drwxr-xr-x - hdfs hdfs 0 2016-11-01 14:06 /user
10-31-2016
04:41 PM
1 Kudo
@Roger Young Assuming you are running Ambari as 'root', it will be in ~root/.ssh/id_rsa. If you're running Ambari as a non-root user you will need to set up passwordless SSH for that user, so the file will be ~<username>/.ssh/id_rsa.
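A minimal sketch of the passwordless SSH setup for a non-root user; 'ambariuser' and the agent host name are placeholders:

# On the Ambari server host, as the non-root user
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Copy the public key to each agent host
ssh-copy-id ambariuser@agent-host.example.com
# The private key Ambari needs is then ~/.ssh/id_rsa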
10-26-2016
09:47 AM
2 Kudos
@Pooja Kamle Ranger policies are not applied to the Hive CLI, which is legacy technology and may be phased out in the future. You should use Beeline/JDBC/ODBC to connect to Hiveserver2 instead.
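For example, a Beeline connection to Hiveserver2 might look like this (host, port and username are placeholders):

beeline -u "jdbc:hive2://hs2-host.example.com:10000/default" -n someuser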
10-24-2016
11:19 AM
There are two types of policies in Ranger: resource-based policies and tag-based policies. Policy Conditions only apply to tag-based policies. In the Ranger Admin UI, go to Access Manager > Tag Based Policies, then click on your tag service; there you'll be able to add a tag-based policy with the Policy Conditions you require. There's more information here: Tag Based Policies
10-24-2016
08:45 AM
@Anders Boje Larsen Deny policies are only enabled for service definitions that have the option enableDenyAndExceptionsInPolicies = true; it is off by default for all services. You'll need to update the service definitions of the services you want deny policies for, as in the sketch below. This page has the required information: Deny-conditions and excludes in Ranger policies
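One way to make the change is through the Ranger Admin REST API; the host, port and credentials below are placeholders:

# Fetch the current Hive service definition
curl -u admin:admin http://ranger-host.example.com:6080/service/public/v2/api/servicedef/name/hive -o hive-servicedef.json
# Edit hive-servicedef.json to set "options": { "enableDenyAndExceptionsInPolicies": "true" }
# Then push the updated definition back
curl -u admin:admin -X PUT -H "Content-Type: application/json" -d @hive-servicedef.json http://ranger-host.example.com:6080/service/public/v2/api/servicedef/name/hive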
09-28-2016
07:39 AM
1 Kudo
@Adi Jabkowsky The reason the users need to exist on the OS (or for you to use Hadoop Group Mapping) is that it is the Hiveserver2 process that takes the username and looks up the groups that user is a member of. It then passes the username and its group membership list to the Ranger Hive plugin (which runs in a thread inside the Hiveserver2 process), and the plugin checks those user details against a cache of the policies defined for Hive.

It is important to understand that the Ranger Hive plugin does not communicate back to the Ranger Admin component during this authorization process. If it did, authorization would be much slower and Ranger Admin would become a single point of failure.

When you synchronize your Active Directory users to Ranger using Ranger UserSync, that only makes the users and groups available for adding to policies in the Ranger Admin UI; it doesn't make those users available on the cluster itself. You either need to integrate the OS with Active Directory or use the Hadoop Group Mapping feature to make the users and groups available.
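To see which group mapping implementation a cluster is using, one quick check (the property name is the standard Hadoop one; the example values are illustrative):

hdfs getconf -confKey hadoop.security.group.mapping
# e.g. org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback (OS lookup)
# or   org.apache.hadoop.security.LdapGroupsMapping (pull groups straight from AD)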
09-27-2016
03:35 PM
1 Kudo
@Adi Jabkowsky Usually this happens because Hiveserver2 cannot determine which groups the user belongs to. Check your Hiveserver2 log for a message like "No groups for user XXX", where XXX is the user being denied access. If that is the case, you'll need to make sure that the OS on the Hiveserver2 node can resolve the groups for that user: either configure the OS to pull user and group information from Active Directory or set up Hadoop Group Mapping.
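A quick way to check group resolution on the Hiveserver2 node ('someuser' is a placeholder):

id someuser            # what the OS resolves for the user
hdfs groups someuser   # what Hadoop's group mapping returns for the user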
08-03-2016
01:28 PM
@bigdata.neophyte The hdfs command doesn't print the result; it sets its exit status, which you can read in the shell. You'll need to test the return code from the 'hdfs dfs -test' command. On Linux, try this:
hdfs dfs -test -e /tmp
echo $?    # prints 0 because /tmp exists
hdfs dfs -test -e /this_doesnt_exist
echo $?    # prints 1 because the path does not exist
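The exit status can also be used directly in a script, for example:

if hdfs dfs -test -e /tmp; then
  echo "path exists"
else
  echo "path does not exist"
fi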