Member since
02-08-2016
793
Posts
669
Kudos Received
85
Solutions
11-11-2016
05:32 PM
4 Kudos
Issue: While performing an HDP upgrade, the last step failed with "Finalizing HDP upgrade Failed on: Save Cluster State" (a sample screenshot is attached below). The exact error can vary; in my case it was:
Begin finalizing the upgrade of cluster TEST_DEV to version 2.5.0.0-1245
The following 7 host(s) have not been upgraded to version 2.5.0.0-1245. Please install and upgrade the Stack Version on those hosts and try again.
Hosts: host1.example.com, host2.example.com, host3.example.com, host4.example.com, host5.example.com, host6.example.com, host7.example.com
Root Cause: Checking the "host_version" table in the Ambari DB showed the 7 hosts in the INSTALLED state, as shown below. The rows with repo_version_id=151 should have been in the CURRENT state, but they were in INSTALLED.
Solution:
1. First, set the previous repo_version_id (i.e. 101) to the INSTALLED state using the command below:
mysql> update host_version set state='INSTALLED' where repo_version_id='101';
2. Now set the hosts that have the latest repo_version_id (i.e. 151) to the CURRENT state:
mysql> update host_version set state='CURRENT' where repo_version_id='151';
3. Restart the Ambari server and click "Retry" on the host upgrade screen, after which the upgrade completed successfully.
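For reference, the inspection and fix steps above can be sketched as a reviewable SQL script. This is a sketch only: the repo_version_id values (101 = previous, 151 = target) are the example values from this post, and the file path is arbitrary.

```shell
# Sketch: write the inspection and fix SQL from the steps above to a file,
# so it can be reviewed before being applied to the Ambari database.
# repo_version_id values (101 = previous, 151 = target) are example values.
cat > /tmp/fix_host_version.sql <<'EOF'
-- Inspect host states first
SELECT host_id, repo_version_id, state FROM host_version ORDER BY host_id;
-- Demote the previous repo version, then promote the new one
UPDATE host_version SET state='INSTALLED' WHERE repo_version_id='101';
UPDATE host_version SET state='CURRENT' WHERE repo_version_id='151';
EOF
# Review the file, then apply it with: mysql ambari < /tmp/fix_host_version.sql
cat /tmp/fix_host_version.sql
```

Keeping the SQL in a file makes it easy to double-check the WHERE clauses before touching the database.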
11-10-2016
02:35 PM
2 Kudos
Issue: Upgrading the Ambari server from version X.X.X to version Y.Y.Y completed, but with errors, as shown below:
sudo ambari-server upgrade
Using python /usr/bin/python
Upgrading ambari-server
Updating properties in ambari.properties ...
WARNING: Can not find ambari-env.sh.rpmsave file from previous version, skipping restore of environment settings
Fixing database objects owner
Ambari Server configured for Embedded Postgres. Confirm you have made a backup of the Ambari Server database [y/n] (y)? y
Upgrading database schema
Error output from schema upgrade command:
com.google.inject.ProvisionException: Guice provision errors:
1) null returned by binding at org.apache.ambari.server.state.ServiceComponentHostFactory.createExisting()
but parameter 2 of org.apache.ambari.server.state.svccomphost.ServiceComponentHostImpl.<init>() is not @Nullable
while locating org.apache.ambari.server.orm.entities.HostComponentDesiredStateEntity annotated with @com.google.inject.assistedinject.Assisted(value=)
for parameter 2 at org.apache.ambari.server.state.svccomphost.ServiceComponentHostImpl.<init>(ServiceComponentHostImpl.java:785)
while locating org.apache.ambari.server.state.ServiceComponentHost annotated with interface com.google.inject.assistedinject.Assisted
1 error
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)
at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)
at com.sun.proxy.$Proxy19.createExisting(Unknown Source)
at org.apache.ambari.server.state.ServiceComponentImpl.<init>(ServiceComponentImpl.java:161)
at org.apache.ambari.server.state.ServiceComponentImpl$$EnhancerByGuice$$ecf0911c.<init>(<generated>)
at org.apache.ambari.server.state.ServiceComponentImpl$$EnhancerByGuice$$ecf0911c$$FastClassByGuice$$5fef485a.newInstance(<generated>)
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.ProxyFactory$ProxyConstructor.newInstance(ProxyFactory.java:260)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)
at com.sun.proxy.$Proxy18.createExisting(Unknown Source)
at org.apache.ambari.server.state.ServiceImpl.<init>(ServiceImpl.java:163)
at org.apache.ambari.server.state.ServiceImpl$$EnhancerByGuice$$d0875be7.<init>(<generated>)
at org.apache.ambari.server.state.ServiceImpl$$EnhancerByGuice$$d0875be7$$FastClassByGuice$$f62c0b29.newInstance(<generated>)
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.ProxyFactory$ProxyConstructor.newInstance(ProxyFactory.java:260)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)
at com.sun.proxy.$Proxy14.createExisting(Unknown Source)
at org.apache.ambari.server.state.cluster.ClusterImpl.loadServices(ClusterImpl.java:426)
at org.apache.ambari.server.state.cluster.ClusterImpl.getServices(ClusterImpl.java:961)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.addNewConfigurationsFromXml(AbstractUpgradeCatalog.java:311)
at org.apache.ambari.server.upgrade.UpgradeCatalog212.executeDMLUpdates(UpgradeCatalog212.java:157)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:664)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:228)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:305)
com.google.inject.ProvisionException: Guice provision errors:
Adjusting ambari-server permissions and ownership...
Ambari Server 'upgrade' completed successfully.
When we then tried to start the Ambari server, it failed with "database consistency check failed".
Debug: Checking the Ambari database check log, we found the error below:
$ cat /var/log/ambari-server/ambari-check-database.log
2016-11-10 09:40:55,365 INFO - ******************************* Check database started *******************************
2016-11-10 09:40:58,841 INFO - Checking for configs not mapped to any cluster
2016-11-10 09:40:58,858 INFO - Checking for configs selected more than once
2016-11-10 09:40:58,860 INFO - Checking for hosts without state
2016-11-10 09:40:58,861 INFO - Checking host component states count equals host component desired states count
2016-11-10 09:40:58,863 ERROR - Your host component states (hostcomponentstate table) count not equals host component desired states (hostcomponentdesiredstate table) count!
2016-11-10 09:40:58,863 INFO - Checking services and their configs
2016-11-10 09:41:01,195 INFO - ******************************* Check database completed *******************************
Resolution:
1. The ambari-check-database.log output above shows a row-count difference between the hostcomponentstate and hostcomponentdesiredstate tables.
2. Running select count(*) on both tables confirmed a difference of X rows (X was 10 in my case):
select count(*) from hostcomponentstate;
select count(*) from hostcomponentdesiredstate;
3. To find the differences, we ran a "group by" on each column, as below:
select count(*) from hostcomponentstate group by host_id;
select count(*) from hostcomponentdesiredstate group by host_id;
We repeated the same query for the remaining columns and found the columns whose counts differed.
4. We deleted the extra entries using a delete command, as shown below:
delete from hostcomponentstate where host_id=**** and version='****';
For example:
delete from hostcomponentstate where host_id=4 and version='2.3.0.0-2557';
5. Restarted the Ambari server, which worked successfully.
6. Stopped the Ambari server again, since 'ambari-server upgrade' had initially reported errors.
7. Re-ran the ambari-server upgrade command, which completed successfully.
8. Started the Ambari server, which came up without any issue.
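Rather than comparing GROUP BY counts column by column (step 3 above), the mismatched rows can be found with a single query. This is a sketch: it assumes both tables share host_id, component_name, and service_name columns, so verify the column names against your Ambari schema first.

```shell
# Sketch: SQL that lists rows present in hostcomponentstate but missing
# from hostcomponentdesiredstate, written to a file for review.
# The join columns are assumptions; check them against your Ambari schema.
cat > /tmp/find_orphans.sql <<'EOF'
SELECT hcs.host_id, hcs.component_name, hcs.service_name
FROM hostcomponentstate hcs
LEFT JOIN hostcomponentdesiredstate hcds
  ON  hcs.host_id = hcds.host_id
  AND hcs.component_name = hcds.component_name
  AND hcs.service_name = hcds.service_name
WHERE hcds.host_id IS NULL;
EOF
cat /tmp/find_orphans.sql
```

Each row this returns is a candidate for the delete in step 4.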
11-08-2016
07:08 PM
2 Kudos
1. Let's assume you have an HDP cluster installed and managed by Ambari.
2. To delete a service (either a custom service or an HDP service) using the API, you generally use the command below:
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://<ambari-server>:8080/api/v1/clusters/c1/services/<SERVICENAME>
3. After executing the above command you might see the error below:
$ curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://<ambari-server>:8080/api/v1/clusters/c1/services/HBASE
{ "status" : 500, "message" : "org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Cannot remove HBASE. Desired state STARTED is not removable. Service must be stopped or disabled." }
4. If you see the above error while removing/stopping a service, use the steps below to resolve the issue.
5. Log in to the Ambari database [in my case it is PostgreSQL] and check the values for the service in the tables below:
# psql -U ambari
[Default password is 'bigdata']
ambari=> select * from servicedesiredstate where service_name='HBASE';
ambari=> select * from servicecomponentdesiredstate where service_name='HBASE';
6. In the above output, the value of the 'desired_state' column should be INSTALLED.
7. If the value of "desired_state" is set to STARTED, update the column and set it to INSTALLED using the command below:
ambari=> update servicedesiredstate set desired_state='INSTALLED' where service_name='HBASE';
8. Follow the same steps for the "servicecomponentdesiredstate" table:
ambari=> update servicecomponentdesiredstate set desired_state='INSTALLED' where service_name='HBASE';
9. Now try removing/deleting the service again. It should work:
$ curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://<ambari-server>:8080/api/v1/clusters/c1/services/HBASE
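Note that editing the database directly is a last resort: the desired state can usually be changed through the REST API itself. A sketch (using the same placeholder host and cluster name c1 as above) that stops the service by setting its state to INSTALLED before deleting it:

```shell
# Stop the service via the API (sets its desired state to INSTALLED)
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop HBASE"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://<ambari-server>:8080/api/v1/clusters/c1/services/HBASE
# ...wait for the stop request to finish, then delete the service
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  http://<ambari-server>:8080/api/v1/clusters/c1/services/HBASE
```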
06-21-2016
09:38 AM
Nice Article and Very useful.
06-17-2016
12:42 PM
5 Kudos
1. Configure a sample Ambari alert notification using the Ambari UI.
2. Log in to the Ambari web UI [admin/admin].
3. Click on "Alerts".
4. Click "Actions" -> "Manage Notifications".
5. Add a sample notification for a service.
6. In the above example you can use a Gmail username and password to test the connection.
7. After adding the sample notification, close the screen.
8. From the Ambari UI, stop one of the ZooKeeper services.
9. You should see an alert indication in the Ambari UI for the stopped ZooKeeper service.
10. At the same time, if you tail ambari-alerts.log you will see "Connection failed" log messages for the ZooKeeper service:
2016-05-27 14:57:31,541 [CRITICAL] [ZOOKEEPER] [zookeeper_server_process] (ZooKeeper Server Process) Connection failed: [Errno 111] Connection refused to node1.example.com:2181
11. Check your email and make sure you received the alert.
12. If you don't see alerts, enable debugging for alerts in the Ambari log4j.properties as shown below.
13. Log in to the Ambari server CLI using superuser credentials [e.g. root]:
vi /etc/ambari-server/conf/log4j.properties
Modify:
log4j.logger.alerts=INFO,alerts
to:
log4j.logger.alerts=DEBUG,alerts
Add the line below to the alerts section:
log4j.logger.org.apache.ambari.server.notifications.dispatchers=DEBUG,alerts
14. Save the file and restart the Ambari server service.
15. Stop ZooKeeper on any one of the nodes and check ambari-server.log and ambari-alerts.log.
16. You should see the alert logs sent by email via the SMTP settings you configured.
17. Any SMTP errors will appear in "/var/log/ambari-server/ambari-server.log".
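To watch for the messages from step 10 as they arrive, the alert log can be followed directly; the path below is the usual Ambari server log directory.

```shell
# Follow the alert log and filter for ZooKeeper entries
tail -f /var/log/ambari-server/ambari-alerts.log | grep -i zookeeper
```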
05-27-2016
01:20 PM
1 Kudo
Problem Statement: When you try to execute a GET call using the Ambari API to list services, it may give an error as shown below:
# curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://<AMBARI_SERVER_HOST>:8080/api/v1/clusters/<cluster_name>/services/
curl: (1) Protocol http not supported or disabled in libcurl
OR
# curl -u admin:admin -H "X-Requested-By: ambari" -X GET “http://node1.example.com:8080/api/v1/clusters/HDP_TEST/services/“
curl: (1) Protocol “http not supported or disabled in libcurl
Solution: curl reports this error when it cannot parse the protocol at the start of the URL. Check the URL for an extra space or stray character before 'http' and delete it.
Also make sure the double quotes around the URL are plain ASCII quotes; smart quotes pasted from a document (as in the second example above) lead to the same error.
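The difference between the two quote characters is easy to miss by eye; dumping their bytes shows why the shell treats a pasted smart quote as just another URL character. A small sketch (any POSIX shell with od):

```shell
# An ASCII double quote is one byte (0x22); a pasted left smart quote
# (U+201C) is three UTF-8 bytes, so the shell does not treat it as quoting
# and passes it to curl as part of the URL.
printf '%s' '"' | od -An -tx1    # ASCII quote
printf '%s' '“' | od -An -tx1    # smart quote, as the shell sees it
```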
05-24-2016
09:23 AM
8 Kudos
Adding a service through the Ambari API gives an error as shown below:
[root@sandbox ~]# curl -u admin:admin -i -X POST -d '{"ServiceInfo":{"service_name":"STORM"}}' http://xxx.xxx.xxx.xxx:8080/api/v1/clusters/Sandbox/services
HTTP/1.1 400 Bad Request
Set-Cookie: AMBARISESSIONID=qraouzksi4vktobhob5heqml;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 107
Server: Jetty(7.6.7.v20120910)
{
"status" : 400,
"message" : "CSRF protection is turned on. X-Requested-By HTTP header is required."
}
You need to disable CSRF protection, as shown below:
1. Log in to the Ambari server CLI [superuser credentials] and edit ambari.properties:
vi /etc/ambari-server/conf/ambari.properties
2. Add the line below at the bottom of the file:
api.csrfPrevention.enabled=false
3. Restart the Ambari server:
# ambari-server restart
4. Execute the POST command again to add the service; it should now work:
[root@sandbox ~]# curl -u admin:admin -i -X POST -d '{"ServiceInfo":{"service_name":"STORM"}}' http://xxx.xxx.xxx.xxx:8080/api/v1/clusters/Sandbox/services
HTTP/1.1 201 Created
Set-Cookie: AMBARISESSIONID=1t4c7yfbu64nw1nenrgplco7sd;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 0
Server: Jetty(7.6.7.v20120910)
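Alternatively, CSRF protection can stay enabled: the 400 response above is only asking for the X-Requested-By header, so adding it to the original request also works (same placeholder address as above):

```shell
# Same POST, with the X-Requested-By header that CSRF protection requires
curl -u admin:admin -i -H "X-Requested-By: ambari" -X POST \
  -d '{"ServiceInfo":{"service_name":"STORM"}}' \
  http://xxx.xxx.xxx.xxx:8080/api/v1/clusters/Sandbox/services
```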
Thanks.
05-19-2016
10:40 AM
1 Kudo
Problem Statement: The Ranger HDFS repository test connection fails while the policies are working fine. After enabling debug mode for the Ranger admin service, we found the error below in the log:
(RangerAuthenticationProvider.java:335) - Unix Authentication Failed:
org.springframework.security.authentication.AuthenticationServiceException: FAILED: unable to authenticate to AuthenticationService: node.example.com:5151
at org.springframework.security.authentication.jaas.DefaultLoginExceptionResolver.resolveException(DefaultLoginExceptionResolver.java:33)
at org.springframework.security.authentication.jaas.AbstractJaasAuthenticationProvider.authenticate(AbstractJaasAuthenticationProvider.java:181)
at org.apache.ranger.security.handler.RangerAuthenticationProvider.getUnixAuthentication(RangerAuthenticationProvider.java:327)
at org.apache.ranger.security.handler.RangerAuthenticationProvider.authenticate(RangerAuthenticationProvider.java:114)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:156)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:174)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:168)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:183)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:105)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1070)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Resolution: On checking, the user principal was different from the one created in the KDC. We corrected the user principal for the HDFS repository in Ranger as well as in the HDFS configs for the Ranger plugin properties.
05-16-2016
06:33 PM
7 Kudos
1. Log in to the Ambari UI using admin credentials [admin/admin].
2. Check the alert definitions using the URL below:
http://<ambari_fqdn>:8080/api/v1/clusters/<cluster-name>/alert_definitions/
3. From the above output, find the alert definition you want to modify. For this example we will modify "Hive Metastore" and change the 'check.command.timeout' value from the default 60 to 120.
4. Copy the output of step 2 to a file, say "test.json".
5. First, modify test.json and remove the second line of the file, which starts with "href", e.g.:
"href" : "http://<ambari-fqdn>:8080/api/v1/clusters/sandbox/alert_definitions/51",
6. Second, edit the value of 'check.command.timeout' in test.json from 60.0 to 120.0:
{
"name" : "check.command.timeout",
"display_name" : "Check command timeout",
"units" : "seconds",
"value" : 120.0,
"description" : "The maximum time before check command will be killed by timeout",
"type" : "NUMERIC"
},
7. Save test.json.
8. Now PUT the JSON back to apply the updated value using the command below:
curl -H 'X-Requested-By:ambari' -u $ambari_username:$ambari_password -X PUT --data @test.json http://<ambari_fqdn>:8080/api/v1/clusters/<cluster-name>/alert_definitions/<alert_no>
Ex. curl -H 'X-Requested-By:ambari' -u $ambari_username:$ambari_password -X PUT --data @test.json http://<ambari-fqdn>:8080/api/v1/clusters/sandbox/alert_definitions/51
9. The above command displays no output.
10. Check the URL below to verify the value was successfully modified:
http://<ambari_fqdn>:8080/api/v1/clusters/<cluster-name>/alert_definitions/<alert_no>
Eg. http://<ambari-fqdn>:8080/api/v1/clusters/sandbox/alert_definitions/51
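Step 4 ("copy the output to test.json") can be done directly with curl; a sketch using the example hostname and definition id 51 from above:

```shell
# Fetch the alert definition JSON and save it to test.json for editing
curl -u admin:admin -H 'X-Requested-By: ambari' \
  "http://<ambari-fqdn>:8080/api/v1/clusters/sandbox/alert_definitions/51" -o test.json
```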
05-01-2016
08:44 AM
3 Kudos
Configure an LDAP server on RedHat/CentOS:
1. Check whether the OpenLDAP packages are installed on the server:
# rpm -qa | grep openldap
2. If the packages are not installed, install them with yum:
# yum install openldap-* -y
3. Once the packages are installed, verify again:
# rpm -qa | grep openldap
4. Create the LDAP password:
# slappasswd
[Enter the password and copy the generated password hash; it goes into the database file below.]
5. Edit the database files for the domain:
# vi /etc/openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif
olcSuffix: dc=example,dc=com
olcRootDN: cn=Manager,dc=example,dc=com
olcRootPW: <paste the hash generated by slappasswd here>
# vi /etc/openldap/slapd.d/cn=config/olcDatabase={1}monitor.ldif
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=manager,dc=example,dc=com" read by * none
6. Run the updatedb command to create or update the database used by locate. It can take some time, so be patient and wait a few seconds:
# yum install mlocate
# updatedb
7. Copy the LDAP example database file:
# cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
# chown ldap:ldap -Rf /var/lib/ldap
# slaptest -u
8. Start the LDAP server:
# service slapd start
9. Check that the service process started properly and is running:
# ps -aef | grep slapd
# netstat -tauepn | grep 389
10. Run ldapsearch:
# ldapsearch -x -b "dc=example,dc=com"
11. Install the migration tools, a set of scripts for migrating users, groups, aliases, hosts, netgroups, networks, protocols, RPCs, and services from an existing nameserver (flat files, NIS, or NetInfo) to LDAP:
# yum install -y migrationtools
# cd /usr/share/migrationtools
# vi migrate_common.ph
Make the following changes:
$NAMINGCONTEXT{'group'} = "ou=Groups";
$DEFAULT_MAIL_DOMAIN = "example.com";
$DEFAULT_BASE = "dc=example,dc=com";
$EXTENDED_SCHEMA = 1;
12. Create LDIF files for the base and users:
# mkdir /root/ldap/
# /usr/share/migrationtools/migrate_base.pl > /root/ldap/base.ldif
- Create users, passwords, and groups for LDAP user testing:
# mkdir /home/ldap
# useradd -d /home/ldap/user1 user1; passwd user1
# useradd -d /home/ldap/user2 user2; passwd user2
# useradd -d /home/ldap/user3 user3; passwd user3
# getent passwd | tail -n 3 > /root/ldap/users
# getent shadow | tail -n 3 > /root/ldap/passwords
# getent group | tail -n 3 > /root/ldap/groups
- Create LDIF files for the users:
# /usr/share/migrationtools/migrate_passwd.pl /root/ldap/users > /root/ldap/users.ldif
# /usr/share/migrationtools/migrate_group.pl /root/ldap/groups > /root/ldap/groups.ldif
13. Add the data to the LDAP server:
# ldapadd -x -W -D "cn=Manager,dc=example,dc=com" -f /root/ldap/base.ldif
# ldapadd -x -W -D "cn=Manager,dc=example,dc=com" -f /root/ldap/users.ldif
# ldapadd -x -W -D "cn=Manager,dc=example,dc=com" -f /root/ldap/groups.ldif
14. Test the user data in LDAP:
# ldapsearch -x -b "dc=example,dc=com"
# ldapsearch -x -b "dc=example,dc=com" | grep user1
# slapcat -v
14.a) Map the users to their respective groups. Create a file named groupsmap.ldif and add the lines below (entries in one LDIF file are separated by blank lines):
# cat /root/groupsmap.ldif
dn: cn=user1,ou=Groups,dc=example,dc=com
changetype: modify
add: memberUid
memberUid: user1

dn: cn=user2,ou=Groups,dc=example,dc=com
changetype: modify
add: memberUid
memberUid: user2

dn: cn=user3,ou=Groups,dc=example,dc=com
changetype: modify
add: memberUid
memberUid: user3

Use ldapmodify to apply the user-to-group mapping:
# ldapmodify -D "cn=Manager,dc=example,dc=com" -W < /root/groupsmap.ldif
15. LDAP client configuration:
# yum install openldap-clients openldap openldap-devel nss-pam-ldapd pam_ldap authconfig authconfig-gtk -y
16. Run the authconfig tool to configure the LDAP client:
# authconfig-tui
17. Check the configuration set in the file:
# cat /etc/openldap/ldap.conf
18. Check the LDAP client configuration on the client side:
# getent passwd user1
# su - user1
19. If you cannot see the user home directory, use the authconfig command to enable home directory creation:
# authconfig --enableldapauth --enablemkhomedir --ldapserver=ldap://<ldap-server-fqdn>:389 --ldapbasedn="dc=example,dc=com" --update
20. You can also configure home directories on NFS; an additional step is required for NFS LDAP configuration.
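To confirm the group mapping from step 14.a took effect, a filtered search can be run (a sketch using the example base DN and user1 from above):

```shell
# Search the Groups OU for entries that now list user1 as a member
ldapsearch -x -b "ou=Groups,dc=example,dc=com" "(memberUid=user1)" memberUid
```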