Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 609 | 06-04-2025 11:36 PM |
| | 1175 | 03-23-2025 05:23 AM |
| | 580 | 03-17-2025 10:18 AM |
| | 2185 | 03-05-2025 01:34 PM |
| | 1373 | 03-03-2025 01:09 PM |
09-11-2017
10:11 AM
@Jean-François Vandemoortele The workaround is to add `--conf spark.hadoop.fs.hdfs.impl.disable.cache=true` to the Spark job's command-line parameters to disable the token cache on the Spark side. Please let me know if that worked out for you.
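For reference, a minimal sketch of where the flag might sit in a spark-submit invocation; the class name `com.example.MyApp` and `app.jar` are hypothetical placeholders, and the snippet only assembles and prints the command rather than running it:

```shell
# Assemble the spark-submit command line; nothing is executed here.
# com.example.MyApp and app.jar are illustrative placeholders.
CMD="spark-submit \
  --class com.example.MyApp \
  --conf spark.hadoop.fs.hdfs.impl.disable.cache=true \
  app.jar"
echo "$CMD"
```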
09-11-2017
08:49 AM
1 Kudo
@Jean-François Vandemoortele You will need to create a jaas.conf and pass it as in the example below:

java -Djava.security.auth.login.config=/home/hdfs-user/jaas.conf \
-Djava.security.krb5.conf=/etc/krb5.conf \
-Djavax.security.auth.useSubjectCredsOnly=false \
-cp "./hdfs-sample-1.0-SNAPSHOT.jar:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-client/*" \
hdfs.sample.HdfsMain

The contents of the jaas.conf should look like this:

---start of jaas.conf-------
com.sun.security.jgss.krb5.initiate {
com.sun.security.auth.module.Krb5LoginModule required
doNotPrompt=true
principal="hdfs-user@YOUR_REALM"
useKeyTab=true
keyTab="/path/to/hdfs-user.keytab"
storeKey=true;
};
---End of jaas.conf-------

Now your jobs will run successfully, renewing their Kerberos tickets from the keytab.
09-11-2017
06:13 AM
@ASIF Khan Unfortunately, you went down the wrong road. Troubleshooting such an unorthodox implementation becomes a nightmare; it is advisable to use a standard management tool like Ambari when implementing HDP clusters. Folks here could then easily guide you, locate the config files, and help out.
09-10-2017
09:16 AM
2 Kudos
@Raul Pingarrón The culprit is "file:///" — hence the message "Must be a slash or drive ........". You should find a way to create a mount point /fast_nfs/yarn/local and reference it as a plain path; the value is a comma-separated list of local directories, like: /hadoop/yarn/local,/opt/hadoop/yarn/local,/usr/hadoop/yarn/local,/var/hadoop/yarn/local Hope that helps
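A sketch of how such a value might look in yarn-site.xml, assuming the property in question is `yarn.nodemanager.local-dirs` and using the NFS mount point from the post plus one local disk; adjust the paths to your own layout:

```xml
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <!-- Plain paths, comma-separated; no file:/// scheme prefix -->
  <value>/hadoop/yarn/local,/fast_nfs/yarn/local</value>
</property>
```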
09-08-2017
01:17 PM
@Andres Urrego How much memory have you allocated to the Sandbox? Which network configuration have you chosen?
09-08-2017
01:14 PM
@Sundara Palanki You should enable the Ranger Kafka plugin, restart all stale configs, and retry.
09-07-2017
10:27 PM
1 Kudo
@Sam Red I don't know whether you love Firefox, but most members find the Firefox configuration easier to set up. Once you kerberize your cluster, you will need a valid Kerberos ticket on your laptop to access the kerberized UIs.
Below are instructions and a link explaining how to enable your browser for Kerberos authentication. Is this what you are encountering? Configure Firefox to authenticate using SPNEGO and Kerberos: https://community.hortonworks.com/articles/28537/user-authentication-from-windows-workstation-to-hd.html
09-07-2017
08:03 PM
2 Kudos
@Sam Red Unfortunately, you will have to use the classic way 🙂 Depending on your OS, adapt the appropriate commands as root; the example below is on CentOS 6:

# useradd user15
# passwd user15

Repeat that on all the hosts in the cluster. From the Ambari server, if you have set up passwordless ssh, it's easier:

# ssh root@host5
[root@host5 ~]# useradd user15
[root@host5 ~]# passwd user15

Tedious work ..... if you have a cluster with 100 nodes!
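The per-host repetition above can be sketched as a loop; the hostnames below are placeholders, and the ssh command is only printed as a dry run rather than executed:

```shell
# Dry run: build and print the command for each host instead of
# executing it over ssh. host1..host3 are placeholder hostnames.
HOSTS="host1 host2 host3"
for h in $HOSTS; do
  CMD="ssh root@$h useradd user15"
  echo "$CMD"
done
```

Setting a password non-interactively would need an extra step (e.g. chpasswd), which is left out of this sketch.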
09-07-2017
07:49 PM
@Vicente Ciampa Got in late tonight. Can you adjust your krb5.conf? The [domain_realm] section should look like this:

[domain_realm]
prosqladmin.local = PROSQLADMIN.LOCAL
.prosqladmin.local = PROSQLADMIN.LOCAL

For the errors, can you drill down and copy-paste the stack trace? The components aren't starting because of a misconfiguration somewhere; indeed, ZooKeeper starts first, since that's where all configurations are stored. Please attach the requested files to facilitate the troubleshooting, and have a look at this HCC document before I log on tomorrow through Skype.
09-07-2017
07:25 PM
@Sam Red When your cluster is integrated with Kerberos security, the authenticated user must exist on every node where the task runs. So create the berlin user on all the hosts and add user berlin to the hadoop group; that should resolve the problem. Please revert.
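A minimal sketch of the two per-node commands; they are only printed here as a dry run, and for real would be executed as root on each host:

```shell
# Dry run: print the commands to create 'berlin' and add it to the
# 'hadoop' group. Run them as root on every node in the cluster.
for cmd in "useradd berlin" "usermod -aG hadoop berlin"; do
  echo "$cmd"
done
```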