Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 732 | 06-04-2025 11:36 PM |
| | 1304 | 03-23-2025 05:23 AM |
| | 641 | 03-17-2025 10:18 AM |
| | 2357 | 03-05-2025 01:34 PM |
| | 1530 | 03-03-2025 01:09 PM |
11-10-2017
04:53 PM
Yeah, I have tried that approach as well. The ODI documentation mentions using its WebLogic Hive JDBC driver, but other drivers can be used as well. The question I have raised here is about the standard (Apache) Hive JDBC driver.
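For reference, the standard Apache Hive JDBC driver uses a jdbc:hive2:// URL; a minimal way to sanity-check such a connection is with Beeline. The host, database, and user below are placeholders, and 10000 is the usual HiveServer2 port:

```bash
# hiveserver2.example.com, "default" and the hive user are placeholders for your environment
beeline -u "jdbc:hive2://hiveserver2.example.com:10000/default" -n hive -e "SHOW DATABASES;"
```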
11-09-2017
07:22 PM
Well, it turns out that neither the ZooKeeper nor the Hive services were running. I started them and the Hive error went away. However, after restarting those services, I was still getting a failure on the ATS check. Starting the YARN service resolved this. Thanks, Sonu.
11-01-2017
04:57 AM
@Chaitanya D Please run an HDFS service check from the Ambari Server UI to see whether all the DataNodes are healthy and running.

The error

java.lang.Exception: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
/user/hduser/sqoop_import/customers/_temporary/0/_temporary/attempt_local270107642_0001_m_000000_0/part-m-00000
could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

indicates that no DataNodes are running, or that the DataNodes are not healthy. So please check that Sqoop has the correct hdfs-site.xml / core-site.xml on its classpath, pointing at a cluster with valid, running DataNodes. You can also run your Sqoop command with the "--verbose" option to inspect the classpath and confirm it includes the correct "hadoop/conf" directory, something like "/usr/hdp/2.6.0.3-8/hadoop/conf".

Please also check that the DataNode process is running, and try putting a file into HDFS to confirm that HDFS write operations are working:

# ps -ef | grep DataNode
# su - hdfs
# hdfs dfs -put /var/log/messages /tmp
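If you prefer to verify from the command line, a dfsadmin report shows how many DataNodes the NameNode can see. This is a minimal sketch and assumes you can switch to the hdfs user on a NameNode host:

```bash
# Summary lines show live/dead DataNodes as seen by the NameNode (Hadoop 2.x report format)
su - hdfs -c "hdfs dfsadmin -report | grep -E 'Live datanodes|Dead datanodes'"
# Simple write test to confirm HDFS accepts new blocks
su - hdfs -c "hdfs dfs -put /var/log/messages /tmp && hdfs dfs -ls /tmp/messages"
```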
10-30-2017
12:54 PM
1 Kudo
@Florin Miron This error can occur if you do not start the NameNode as the hdfs user. You are trying to run it as root, but the file /hadoop/hdfs/namenode/in_use.lock is owned by the hdfs user. Do not use sudo to start Hadoop processes; start the NameNode as hdfs instead. Try this:

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"

Can you try that and revert?
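As a quick check before switching users, the ownership of the lock file from the error message shows which user last ran the NameNode; the paths below are taken from this thread:

```bash
# The lock file should be owned by hdfs; if it shows root, the NameNode was started as the wrong user
ls -l /hadoop/hdfs/namenode/in_use.lock
# Start the NameNode as hdfs rather than root
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
```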
11-17-2017
07:32 AM
I found the solution, and the issue is fixed now.

In my case the LDAP username is 'dvteam', but the LDAP entry carries the full description 'architecture dev team' (plus location, team details, etc.). In nifi-user.log I could see the user 'architecture dev team' authenticating against the NiFi nodes: authentication succeeded but authorization did not. The Initial Admin Identity I had configured was 'dvteam' (cn=dvteam,ou=xx,ou=xx,ou=xx,ou=xx,dc=abc,dc=com), so, following the logs, I changed it to (cn=architecture dev team,ou=xx,ou=xx,ou=xx,ou=xx,dc=abc,dc=com).

There was also a hostname mismatch in the Node Identities section. 'hostname -f' returned ip-zz-xx-ec2-internal, and using 'ip-zz-xx-ec2-internal' in Node Identities did not work. I then changed the hostnames to 'nifi1.abc.local', 'nifi2.abc.local', etc. and used those in Node Identities instead.

In the 'Template for login-identity-providers.xml' I made another change. I originally had USE_USERNAME in the '<property name="Identity Strategy">...</property>' section and later changed it to USE_DN, because according to nifi-user.log authentication happens with the LDAP user 'architecture dev team', so USE_USERNAME was not working for authentication in my case. After every configuration change I removed authorizations.xml and users.xml from all my NiFi nodes.

There was also some confusion about the 'OU' in the Node Identities section; I still don't know exactly what it means there. I eventually set 'OU=nifi', kept the 'nifi1.abc.local' style hostnames, and added the AD/LDAP user as the Initial Admin Identity (cn=architecture dev team,ou=xx,ou=xx,ou=xx,ou=xx,dc=abc,dc=com).

After all of the above I hit an error around nifi.security.identity.mapping.pattern.dn. The challenge was defining the pattern: there are four 'ou' components in the DNs I used for the Initial Admin Identity and in login-identity-providers.xml, so I used the pattern below and it worked well.

^cn=(.*?),ou=(.*?),ou=(.*?),ou=(.*?),ou=(.*?),dc=(.*?),dc=(.*?)$

Note: I have removed Ranger completely.

Thanks, Suraj
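For anyone hitting the same identity-mapping error, here is a sketch of what the related nifi.properties entries typically look like. The conf path is an assumption (adjust it to your install), the OU depth must match your own DNs, and the value mapping shown (keeping only the CN via $1) is a common pairing rather than something stated in this post:

```bash
# Hypothetical conf path; adjust to your NiFi installation
grep '^nifi.security.identity.mapping' /usr/hdf/current/nifi/conf/nifi.properties
# Example entries (the pattern must match the full DN returned by LDAP):
#   nifi.security.identity.mapping.pattern.dn=^cn=(.*?),ou=(.*?),ou=(.*?),ou=(.*?),ou=(.*?),dc=(.*?),dc=(.*?)$
#   nifi.security.identity.mapping.value.dn=$1
```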
10-24-2017
01:04 PM
The issue was resolved by disabling Kerberos authentication for Druid and also by fixing Broker host and Broker port values in the Superset console for the Druid cluster. Thank you, @Nishant Bangarwa, for all the help.
10-17-2017
04:54 PM
@Neha G In a kerberized cluster there are two types of keytabs/principals: headless and service principals. Headless principals are not bound to a specific host or node and have the form user@SRV.COM. Service principals are bound to a specific service and host/node and have the form service/host@SRV.COM. So when you initialize (kinit) with the hdfs.headless.keytab it acts like a DoAs: the user takes on hdfs permissions.
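A quick way to see the difference on a node; the keytab paths below are typical HDP defaults, and the principal names are placeholders:

```bash
# Headless keytab: principal has no hostname component, e.g. hdfs-mycluster@SRV.COM
klist -kt /etc/security/keytabs/hdfs.headless.keytab
# Service keytab: principal is bound to this host, e.g. nn/node1.example.com@SRV.COM
klist -kt /etc/security/keytabs/nn.service.keytab
# kinit as the headless principal (substitute the principal printed above); operations then run as hdfs
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-mycluster@SRV.COM
hdfs dfs -ls /user
```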
06-05-2018
01:39 PM
@raouia Please check the following: https://stackoverflow.com/questions/44217654/how-to-recover-zookeeper-from-java-io-eofexception-after-a-server-crash The link above lists a few approaches you can take to resolve this problem. HTH
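One approach commonly suggested for this EOFException is to move the truncated transaction log/snapshot files out of the affected server's dataDir and let it resync from the rest of the ensemble. This is only a sketch: the dataDir below is an assumption (confirm it in zoo.cfg), and it is only safe when the other ensemble members still hold a good copy of the data:

```bash
# Confirm the data directory first, e.g.: grep dataDir /etc/zookeeper/conf/zoo.cfg
ZK_DATA=/hadoop/zookeeper
mkdir -p /tmp/zk-backup
# Move the possibly truncated logs/snapshots aside rather than deleting them
mv "$ZK_DATA"/version-2/log.* "$ZK_DATA"/version-2/snapshot.* /tmp/zk-backup/ 2>/dev/null
# Restart this ZooKeeper server; it should resync from the remaining ensemble members
```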
10-13-2017
03:03 PM
Sorry, unable to find it.
10-13-2018
11:26 AM
This was fixed for me by updating the FQDN to point to the domain name, via changes to /etc/hosts and resolv.conf.
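A minimal sketch of what to verify, using hypothetical values (10.0.0.11 / node1.example.com):

```bash
hostname -f            # should return the fully qualified name, e.g. node1.example.com
cat /etc/hosts         # expect a line like: 10.0.0.11  node1.example.com node1
cat /etc/resolv.conf   # expect a domain/search entry, e.g.: search example.com
```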