12-07-2017
12:07 PM
Yes, you'll need a jaas.conf on your path looking something like the following, and we also have a krb5.conf on the path:

com.sun.security.jgss.krb5.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    principal=""
    useKeyTab=true
    keyTab=""
    storeKey=true;
};

LoginKerb {
    com.sun.security.auth.module.Krb5LoginModule required client=TRUE;
};
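The JVM won't pick these files up on its own; as a sketch (the paths below are placeholders, not our real ones), you point it at them with the standard java.security system properties before any login code runs, or pass the equivalent -D flags on the java command line:

// Hypothetical paths; set these before any LoginContext/UGI code executes.
// Command-line equivalent:
//   -Djava.security.auth.login.config=/path/to/jaas.conf -Djava.security.krb5.conf=/path/to/krb5.conf
System.setProperty("java.security.auth.login.config", "/path/to/jaas.conf");
System.setProperty("java.security.krb5.conf", "/path/to/krb5.conf");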
12-07-2017
11:29 AM
I've updated the code snippet. And yes, kerberos.user is the principal.
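For anyone following along, a quick sketch of how those two properties reach the callback handler; the realm and values below are made up, and in practice we pass them as -D flags rather than hard-coding them:

// Hypothetical values; kerberos.user is the full principal, kerberos.password its password.
// Command-line equivalent: -Dkerberos.user=user@EXAMPLE.COM -Dkerberos.password=secret
System.setProperty("kerberos.user", "user@EXAMPLE.COM");
System.setProperty("kerberos.password", "secret");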
12-07-2017
10:34 AM
We have this working now. When running in local mode we invoke the following loginKerb method before creating the SparkSession:

import org.apache.hadoop.security.UserGroupInformation;

import javax.security.auth.callback.*;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import java.io.IOException;

public class LoginKerb {

    public static void loginKerb() throws LoginException, IOException {
        LoginContext lc = kinit();
        // Hand the authenticated Subject to Hadoop so subsequent HDFS/Spark calls use it.
        UserGroupInformation.loginUserFromSubject(lc.getSubject());
    }

    private static LoginContext kinit() throws LoginException {
        // The entry name ("LoginKerb") must match the entry in jaas.conf.
        LoginContext lc = new LoginContext(LoginKerb.class.getSimpleName(), callbacks -> {
            for (Callback c : callbacks) {
                if (c instanceof NameCallback)
                    ((NameCallback) c).setName(System.getProperty("kerberos.user"));
                if (c instanceof PasswordCallback)
                    ((PasswordCallback) c).setPassword(System.getProperty("kerberos.password").toCharArray());
            }
        });
        lc.login();
        return lc;
    }
}
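To show where this fits, here is a minimal, hypothetical driver (the class name and job body are made up) illustrating the call order: JAAS/UGI login first, SparkSession second:

import org.apache.spark.sql.SparkSession;

public class LocalModeApp {
    public static void main(String[] args) throws Exception {
        LoginKerb.loginKerb(); // authenticate before any Spark/Hadoop code touches HDFS
        SparkSession spark = SparkSession.builder()
                .master("local[*]")
                .appName("LocalModeApp")
                .getOrCreate();
        // ... run the job under the Kerberos login ...
        spark.stop();
    }
}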
11-23-2017
04:14 PM
Thanks, adding the nodes to the "all - nifi-resource" policy fixed the same problem for me.
09-26-2017
09:27 AM
As of HDP 2.6.1, it is possible to install HDF components on an HDP cluster. See https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.1.1/bk_installing-hdf-and-hdp/content/ch_install-ambari.html
06-16-2017
09:33 AM
We've recently kerberized our HDFS development cluster. Before we did this we could run Spark jobs with spark.master=local from an IDE to test new code, which allowed debugging before deploying the code to the cluster and running in yarn mode. Since kerberizing the cluster I've not been able to find a way to run Spark jobs in local mode. We get the following error:

org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]

Everything works fine if we deploy the code and run in yarn mode, but this slows down development cycles. I've tried passing through the HDFS config files and setting "hadoop.security.authentication"="kerberos", and I've looked on the internet, but I have not found a definitive answer as to whether I can run a Spark job in local mode against a kerberized cluster.
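For reference, the configuration attempt looked roughly like the sketch below (not the exact code; spark.hadoop.* settings are forwarded into the underlying Hadoop Configuration):

SparkSession spark = SparkSession.builder()
        .master("local[*]")
        .config("spark.hadoop.hadoop.security.authentication", "kerberos")
        .getOrCreate();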
Labels:
- Apache Hadoop
- Apache Spark
05-18-2017
10:05 AM
For me the following fixed it. Set these configs on Hive:

webhcat.proxyuser.root.groups = *
webhcat.proxyuser.root.hosts = *
05-10-2017
01:39 PM
This worked for me. The command to enable the optional repo is:

yum-config-manager --enable rhui-REGION-rhel-server-optional
02-10-2017
10:51 AM
I hit the same issue. In my case I was missing the Ambari yum repository and fixed it with the command below, which works for Ambari version 2.4.1.0 on Red Hat 7. Make sure you add the correct repo for your OS and Ambari version.

yum-config-manager --add-repo http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.4.1.0/ambari.repo