Member since: 08-08-2013
Posts: 339
Kudos Received: 132
Solutions: 27
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 16083 | 01-18-2018 08:38 AM |
| | 1999 | 05-11-2017 06:50 PM |
| | 10401 | 04-28-2017 11:00 AM |
| | 4123 | 04-12-2017 01:36 AM |
| | 3211 | 02-14-2017 05:11 AM |
01-27-2016
05:50 PM
1 Kudo
Hi @mkataria, sure, I'll try my best. First, click on the 'YARN' service in Ambari. In the next dialog, create one config group per NodeManager, provide a corresponding name, and assign that node to that config group. Then go back to the general YARN config page (picture 1), select a config group, and adjust the log destination for that particular NodeManager node (= config group). ...and restart YARN 😉 Regards, Gerd
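A minimal sketch of what such a per-config-group override could look like, assuming the property being redirected is the NodeManager recovery directory from @mkataria's question (the property name yarn.nodemanager.recovery.dir and the paths are assumptions):

```
<!-- yarn-site.xml override in the config group of the first NodeManager;
     property name and path are assumptions based on the question -->
<property>
  <name>yarn.nodemanager.recovery.dir</name>
  <value>/hdp/logs/hadoop-yarn/nm1</value>
</property>
```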
01-27-2016
12:28 PM
Hi @Sai ram, it looks like you are using Ranger and do not have a Ranger HDFS policy that allows the user hive to write to "/flume". The solution from @Neeraj Sabharwal grants permissions at the HDFS level and solves your problem; however, if you want to go with Ranger, I'd recommend creating/adjusting Ranger HDFS policies for the relevant folders/users (and doing at least a chmod 700 at the HDFS level itself, to prevent accessing folders/files "by accident").
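A minimal sketch of the HDFS-level lockdown mentioned above (the flume owner and hadoop group are assumptions; adjust to your setup):

```
# restrict /flume so only its owner can access it; Ranger policies then
# selectively re-open access for users such as hive
sudo -u hdfs hdfs dfs -chown flume:hadoop /flume
sudo -u hdfs hdfs dfs -chmod 700 /flume
```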
01-27-2016
07:55 AM
3 Kudos
Hi @mkataria, did I understand that correctly: do all the NodeManagers have some kind of network storage mounted and want to write to /hdp/logs/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state? This won't work, since every NM wants to keep its own state in the directory .../yarn-nm-state, so only one NM can create the LOCK file there (besides the files keeping the state). Writing to a central directory is difficult in such cases. One solution could be to put each NM in a different config group and specify the log directory per config group, e.g. /hdp/logs/hadoop-yarn/nm1, /hdp/logs/hadoop-yarn/nm2, ... Regards, Gerd
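A short sketch of preparing one of those per-node directories on its NodeManager host (the yarn user and hadoop group are assumptions):

```
# run on the first NodeManager host; repeat with nm2, nm3, ... on the other hosts
sudo mkdir -p /hdp/logs/hadoop-yarn/nm1
sudo chown -R yarn:hadoop /hdp/logs/hadoop-yarn/nm1
```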
01-27-2016
07:41 AM
Thanks @Kevin Minder, brilliant!
01-26-2016
07:39 PM
1 Kudo
Hi @Kevin Minder, hi @Neeraj Sabharwal, I investigated a bit more to find out how I can set the Bind DN to uid=admin,ou=people,dc=hadoop,dc=apache,dc=prd. Yes, I just want to change the "domain". I did that in Ambari => Knox => users.ldif, and additionally I set the log output to DEBUG. After restarting the demo LDAP server I found this in the log:
2016-01-26 20:23:41,136 INFO store.LdifFileLoader (LdifFileLoader.java:execute(212)) - Could not create entry Entry
dn[n]: uid=admin,ou=people,dc=hadoop,dc=apache,dc=prd
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
uid: admin
userpassword: admin-password
sn: Admin
cn: Admin
org.apache.directory.api.ldap.model.exception.LdapNoSuchObjectException: ERR_268 Cannot find a partition for uid=admin,ou=people,dc=hadoop,dc=apache,dc=prd
It seems like by default only dc=hadoop,dc=apache,dc=org is allowed?! How do I add a custom 'partition' to set a custom domain? I tried it using Apache Directory Studio by right-clicking on the connection => "open configuration" (while being connected successfully), but unfortunately I received the error:
org.apache.directory.api.ldap.model.exception.LdapNoSuchObjectException: Unable to find the 'ou=config' base entry.
at org.apache.directory.studio.apacheds.configuration.jobs.LoadConfigurationRunnable.readConfiguration(LoadConfigurationRunnable.java:382)
at org.apache.directory.studio.apacheds.configuration.jobs.LoadConfigurationRunnable.getConfiguration(LoadConfigurationRunnable.java:201)
at org.apache.directory.studio.apacheds.configuration.jobs.LoadConfigurationRunnable.run(LoadConfigurationRunnable.java:139)
at org.apache.directory.studio.common.core.jobs.StudioJob.run(StudioJob.java:83)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)
Any hints on how to set a custom domain? Thanks in advance...
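For reference, a minimal sketch of an admin entry the demo LDAP does accept, staying under the dc=hadoop,dc=apache,dc=org partition the error message points to (it mirrors the failing entry above; only the trailing dc differs):

```
# users.ldif entry under the default partition
dn: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
uid: admin
userpassword: admin-password
sn: Admin
cn: Admin
```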
01-25-2016
01:57 PM
3 Kudos
Hi @Robert Levas, @mahadev, just wanted to drop you the note that I now have a Kerberos-enabled cluster. How? I just ignored the failure messages during service startup and wanted to dig into what happens while Ambari creates principals and keytabs. I left the cluster in the stopped state, including all errors (~60 red alerts). To start the journey, I ran "Regenerate Keytabs" in Ambari => Admin => Kerberos. Surprisingly, this triggered the creation of principals and keytabs successfully, and I ended up in the state I had expected from the wizard: all the required principals and keytabs on the corresponding hosts. Anyway, after the "Regenerate Keytabs" I was able to successfully start all the services.
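For anyone who prefers the API route, a sketch of the REST call that should be equivalent to the UI button (host, credentials, and cluster name are placeholders, and the exact parameters may differ per Ambari version):

```
# regenerate all keytabs via the Ambari REST API
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"Clusters": {"security_type": "KERBEROS"}}' \
  "http://ambari-host:8080/api/v1/clusters/CLUSTER_NAME?regenerate_keytabs=all"
```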
01-25-2016
01:07 PM
Hi @Robert Levas, thanks for this hint. I did exactly that, but ended up in the same situation: no principals have been created and no keytabs have been deployed, although the wizard marked every step as "green" until starting up the services.
01-25-2016
11:40 AM
@Robert Levas: it is Ambari 2.0.1 (in combination with HDP 2.2.4.2)
01-25-2016
11:15 AM
Hello,
I am again facing the same issue while enabling Kerberos on a newly installed cluster => no principals are being created and no keytabs are generated, although the Enable Kerberos wizard claims otherwise?! I didn't edit the encryption type field in the Kerberos wizard, and Ambari is running as root, so it should be able to write to /var/lib/ambari-server/tmp. The Ambari logfile states the creation of the keytab files:
25 Jan 2016 12:01:48,279 INFO [Server Action Executor Worker 2148] CreateKeytabFilesServerAction:170 - Creating keytab file for HTTP/b0d05g22.<domain>@<realm> on host b0d05g22.<domain>
25 Jan 2016 12:01:48,280 INFO [Server Action Executor Worker 2148] CreateKeytabFilesServerAction:170 - Creating keytab file for hdfs@<realm> on host b0d05g22.<domain>
...but in the end, no keytab file is deployed and no principal has been created either. If I check the principals AFTER the Kerberos wizard has "successfully" created them, none of them are in the KDC:
sudo kadmin.local
kadmin.local: listprincs
K/M@<realm>
admin/admin@<realm>
kadmin/admin@<realm>
kadmin/b0d095j2.<domain>@<realm>
kadmin/changepw@<realm>
krbtgt/HDP.ZURICH.PRD@<realm>
kadmin.local:
The Kerberos client conf contains:
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = <realm>
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
What else should I check? Any hint highly appreciated...
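One check I can run to narrow this down, sketched below: verify that the admin credentials given to the wizard can actually create a principal in the KDC (the admin/admin principal and the test principal name are assumptions):

```
# if this fails, the wizard's "green" steps never reached the KDC either
kadmin -p admin/admin -q "addprinc -randkey ambari-test@<realm>"
kadmin -p admin/admin -q "listprincs"
kadmin -p admin/admin -q "delprinc -force ambari-test@<realm>"
```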
01-25-2016
07:48 AM
Hi @Robin Dong, you have to choose "[3] Custom JDK", since you installed the JDK 'manually'; the other options are for letting Ambari install the JDK. It doesn't mean that you installed a custom-built JDK 😉 ... maybe the wording is a bit confusing here
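For illustration, the relevant prompt in ambari-server setup looks roughly like this (a sketch; the option list and wording vary by Ambari version):

```
# from "ambari-server setup" (approximate, version-dependent)
Checking JDK...
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Oracle JDK 1.7 + Java Cryptography Extension (JCE) Policy Files 7
[3] Custom JDK
Enter choice (1): 3
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
Path to JAVA_HOME: /usr/java/default
```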