Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 608 | 06-04-2025 11:36 PM |
| | 1166 | 03-23-2025 05:23 AM |
| | 575 | 03-17-2025 10:18 AM |
| | 2172 | 03-05-2025 01:34 PM |
| | 1368 | 03-03-2025 01:09 PM |
11-09-2017
07:22 PM
Well, it turns out that neither the ZooKeeper nor the Hive services were running. I started them and the Hive error went away. However, after restarting those services, I was still getting a failure on the ATS check. Starting the YARN service resolved this. Thanks, Sonu.
11-01-2017
04:57 AM
@Chaitanya D Please run the HDFS Service Check from the Ambari UI to see if all the DataNodes are healthy and running.

java.lang.Exception: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hduser/sqoop_import/customers/_temporary/0/_temporary/attempt_local270107642_0001_m_000000_0/part-m-00000 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

The above error indicates that no DataNodes are running, or that the DataNodes are not healthy. So please check whether your Sqoop is using the correct hdfs-site.xml / core-site.xml in its classpath, pointing to valid, running DataNodes. You can also run your Sqoop command with the "--verbose" option to inspect the classpath and confirm it includes the correct "hadoop/conf" directory, something like "/usr/hdp/2.6.0.3-8/hadoop/conf". Finally, check that the DataNode process is running and try to put some file to HDFS to verify that HDFS store operations are working:

# ps -ef | grep DataNode
# su - hdfs
# hdfs dfs -put /var/log/messages /tmp
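Besides the shell checks above, the live DataNode count can also be read from the NameNode's JMX endpoint. The sketch below is not from the original post; the host, port (50070 is the common HDP 2.x default), and the fetch are assumptions, so the HTTP call is left commented out and only the parsing runs.

```python
import json  # used by the commented-out fetch below

def live_datanodes(jmx_payload):
    """Extract NumLiveDataNodes from a NameNode /jmx JSON payload."""
    for bean in jmx_payload.get("beans", []):
        if bean.get("name", "").endswith("FSNamesystemState"):
            return bean.get("NumLiveDataNodes")
    return None

# Hypothetical fetch (host/port are placeholders, not from the post):
# import urllib.request
# url = ("http://namenode.example.com:50070/jmx"
#        "?qry=Hadoop:service=NameNode,name=FSNamesystemState")
# payload = json.load(urllib.request.urlopen(url))

sample = {"beans": [{"name": "Hadoop:service=NameNode,name=FSNamesystemState",
                     "NumLiveDataNodes": 0}]}
print(live_datanodes(sample))  # -> 0, matching the "0 datanode(s) running" error
```

A result of 0 here corresponds exactly to the "There are 0 datanode(s) running" message in the stack trace above.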
10-30-2017
12:54 PM
1 Kudo
@Florin Miron This error can arise if you do not start the NameNode as the hdfs user. You are trying to run it as root, but the file /hadoop/hdfs/namenode/in_use.lock is owned by the hdfs user. Do not use sudo to start Hadoop processes. Try starting it like this instead: su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode" Can you try that and revert?
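A quick way to confirm this kind of ownership mismatch before starting the daemon is to check who owns the lock file. This is a minimal sketch, not part of the original answer; the lock path and expected user are taken from the error described above.

```python
import os
import pwd

def file_owner(path):
    """Return the username that owns `path`."""
    return pwd.getpwuid(os.stat(path).st_uid).pw_name

# Path and expected owner from the error above:
# lock = "/hadoop/hdfs/namenode/in_use.lock"
# if file_owner(lock) != "hdfs":
#     print("in_use.lock is owned by", file_owner(lock),
#           "- start the NameNode as the hdfs user instead")
```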
11-17-2017
07:32 AM
I found the solution; the issue is fixed now. In my case, one of the LDAP usernames is 'dvteam', but the LDAP database stored the full description of the user as 'architecture dev team' (plus location, team details, etc.). According to the messages in nifi-user.log, the 'architecture dev team' user was authenticating with the NiFi nodes: authentication succeeded, but authorization did not. The username I had set in Initial Admin Identity was 'dvteam' (cn=dvteam,ou=xx,ou=xx,ou=xx,ou=xx,dc=abc,dc=com). Based on the logs, I changed it to (cn=architecture dev team,ou=xx,ou=xx,ou=xx,ou=xx,dc=abc,dc=com).

There was also a mismatch in the host names in the Node Identities section. 'hostname -f' returned ip-zz-xx-ec2-internal, so I used 'ip-zz-xx-ec2-internal' in Node Identities, but that did not work. I then changed the hostnames to 'nifi1.abc.local', 'nifi2.abc.local', etc. and used those in Node Identities instead.

In 'Template for login-identity-providers.xml' I made another change. I had originally set USE_USERNAME in the '<property name="Identity Strategy">USE_DN</property>' section and later changed it to USE_DN, because per nifi-user.log authentication was happening as the LDAP user 'architecture dev team', so USE_USERNAME was not working for authentication in my case. After every configuration change I removed the authorizations.xml and users.xml files from all of my NiFi nodes.

There was also some confusion about 'OU' in the Node Identities section. What does OU mean there? I don't know yet. I eventually set 'OU=nifi' and gave the host names as 'nifi1.abc.local', 'nifi2.abc.local', etc. I added the AD/LDAP user in Initial Admin Identity (cn=architecture dev team,ou=xx,ou=xx,ou=xx,ou=xx,dc=abc,dc=com).

After setting all of the above, I faced one more error around nifi.security.identity.mapping.pattern.dn. Defining the pattern was a challenge, because I had four 'ou' components in the Initial Admin Identity and login-identity-providers.xml. The pattern below worked well for me (the '*' characters were stripped by the forum rendering in my earlier post): ^cn=(.*?),ou=(.*?),ou=(.*?),ou=(.*?),ou=(.*?),dc=(.*?),dc=(.*?)$

Note: I have removed Ranger completely. Thanks, Suraj
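The DN mapping pattern can be checked quickly outside NiFi. The sketch below is only an illustration (NiFi itself uses Java regex with `$1`-style substitution, not Python), with the lazy `(.*?)` groups restored where the forum rendering had stripped the asterisks; the sample DN values are placeholders.

```python
import re

# DN mapping pattern from the post, with the (.*?) groups written out in full.
PATTERN = r"^cn=(.*?),ou=(.*?),ou=(.*?),ou=(.*?),ou=(.*?),dc=(.*?),dc=(.*?)$"

def map_dn(dn):
    """Return the cn extracted from a full DN, or None if the DN doesn't match."""
    m = re.match(PATTERN, dn)
    return m.group(1) if m else None

dn = "cn=architecture dev team,ou=a,ou=b,ou=c,ou=d,dc=abc,dc=com"
print(map_dn(dn))  # -> architecture dev team
```

A DN with a different number of 'ou' components fails to match, which is exactly the symptom described above when the pattern does not mirror the directory layout.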
10-24-2017
01:04 PM
The issue was resolved by disabling Kerberos authentication for Druid and also by fixing Broker host and Broker port values in the Superset console for the Druid cluster. Thank you, @Nishant Bangarwa, for all the help.
10-17-2017
04:54 PM
@Neha G In a Kerberized cluster there are two types of keytabs/principals: headless and service principals. Headless principals are not bound to a specific host or node and take the form user@SRV.COM. Service principals are bound to a specific service and host/node and take the form service/host@SRV.COM. So when you kinit with the hdfs.headless.keytab, operations run as (DoAs) the hdfs user, and that user takes on hdfs permissions.
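The distinction above comes down to whether the principal's name component contains a `/host` part. A minimal sketch (not from the original answer; the example principal strings are made up):

```python
def principal_kind(principal):
    """Classify a Kerberos principal string as 'headless' or 'service'.

    Headless: user@REALM            e.g. hdfs-mycluster@SRV.COM
    Service:  service/host@REALM    e.g. nn/host1.srv.com@SRV.COM
    """
    name = principal.split("@", 1)[0]  # part before the realm
    return "service" if "/" in name else "headless"

print(principal_kind("hdfs-mycluster@SRV.COM"))    # -> headless
print(principal_kind("nn/host1.srv.com@SRV.COM"))  # -> service
```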
06-05-2018
01:39 PM
@raouia Please check the following: https://stackoverflow.com/questions/44217654/how-to-recover-zookeeper-from-java-io-eofexception-after-a-server-crash The above lists a few solutions you can take to resolve this problem. HTH
10-13-2017
03:03 PM
Sorry, I am unable to find it.
10-13-2018
11:26 AM
This was fixed for me by updating the FQDN to point to the domain name, via /etc/hosts and resolv.conf.
10-12-2017
08:28 AM
1 Kudo
@forest lin The kdc.conf looks fine, but your initial and final krb5.conf don't look correct: you forgot to add the lowercase [domain_realm] entries, see below. Please back up your current krb5.conf on all the hosts and replace it with the below exactly as it is.

[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = ABC.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[domain_realm]
abc.com = ABC.COM
.abc.com = ABC.COM

[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log

[realms]
ABC.COM = {
admin_server = nn1-dev1-tbdp
kdc = nn1-dev1-tbdp
}

Did you re-run the below to correctly set up the KDC and KDC Admin hostnames?

dpkg-reconfigure krb5-kdc

Can you also validate that the host entries on all the hosts are the same and include the KDC server host entry? What is the content of your kadm5.acl file? On the KDC server, can you paste the output of the below commands? Please obscure the domain name.

# kdestroy
# kadmin.local
Authenticating as principal root/admin@ABC.COM with password.
kadmin.local: listprincs

After validating and changing the above, restart the services:

service krb5-kdc restart
service krb5-admin-server restart

Don't forget to enable auto-restart of the KDC and kadmin (use the appropriate Ubuntu command):

chkconfig krb5kdc on
chkconfig kadmin on

Now try the Ambari --> Kerberos wizard again; it should succeed. The logs are in these locations on the KDC and clients:

default = /var/log/krb5kdc.log
admin_server = /var/log/kadmind.log
kdc = /var/log/krb5kdc.log

Please revert.
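To see why both the exact "abc.com" entry and the leading-dot ".abc.com" entry matter in [domain_realm], here is a rough sketch of the lookup libkrb5 performs (a simplification, not the library's actual algorithm; hostnames are examples).

```python
# [domain_realm] entries from the krb5.conf above
DOMAIN_REALM = {
    "abc.com": "ABC.COM",   # matches the host literally named "abc.com"
    ".abc.com": "ABC.COM",  # matches any host *under* abc.com
}

def realm_for(host, mapping=DOMAIN_REALM):
    """Approximate the [domain_realm] hostname-to-realm lookup."""
    host = host.lower()
    if host in mapping:  # exact hostname match wins first
        return mapping[host]
    # otherwise use the longest matching ".suffix" entry
    best = None
    for key, realm in mapping.items():
        if key.startswith(".") and host.endswith(key):
            if best is None or len(key) > len(best[0]):
                best = (key, realm)
    return best[1] if best else None

print(realm_for("nn1-dev1-tbdp.abc.com"))  # -> ABC.COM (via ".abc.com")
print(realm_for("abc.com"))                # -> ABC.COM (exact entry)
```

Without the lowercase entries, hosts in abc.com would not map to ABC.COM at all, which is the failure mode called out above.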