Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 479 | 06-04-2025 11:36 PM |
| | 1006 | 03-23-2025 05:23 AM |
| | 536 | 03-17-2025 10:18 AM |
| | 1946 | 03-05-2025 01:34 PM |
| | 1252 | 03-03-2025 01:09 PM |
08-06-2021
01:31 PM
@NIFI_123 Maybe try this crontab generator; it has more possibilities. Hope that helps.
08-01-2021
12:24 PM
@Vinay1991 I mentioned the logs below. You will definitely need the ZooKeeper ZKFailoverController (ZKFC) logs and the NameNode logs.
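To grab both sets of logs in one go, something like the sketch below works; the paths and file patterns are assumptions based on default HDP log locations, so adjust them to your install:

```shell
# Hypothetical default HDP log locations -- adjust the paths/patterns to
# your install. The ZKFC and NameNode logs are the ones to capture.
for f in /var/log/hadoop/hdfs/*zkfc*.log /var/log/hadoop/hdfs/*namenode*.log; do
  if [ -e "$f" ]; then
    echo "== $f =="
    tail -n 200 "$f"
  else
    # An unmatched glob stays literal, so this reports missing logs cleanly.
    echo "no log matching $f on this host"
  fi
done
```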
07-30-2021
02:48 PM
@Buithuy96 First and foremost, re-running these steps won't damage your cluster, I assure you. What you have is purely a permissions issue:

```
java.sql.SQLException: Access denied for user 'ambari'@'mtnode.hdp.vn' (using password: YES)
```

Revalidate the MySQL connector:

```shell
# yum install -y mysql-connector-java
# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
```

Re-run the Ambari user setup (note the password must be identical in every statement):

```sql
CREATE USER 'ambari'@'%' IDENTIFIED BY 'Ctct@123';
CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'Ctct@123';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'mtnode.hdp.vn' IDENTIFIED BY 'Ctct@123';
FLUSH PRIVILEGES;
```

Then try restarting Ambari while tailing ambari-server.log and share the contents. Reset the log before starting Ambari, so you have a minimal log to delve through:

```shell
# truncate --size 0 /var/log/ambari-server/ambari-server.log
```

Restart your Ambari server and tail the log:

```shell
# tail -f /var/log/ambari-server/ambari-server.log
```

Share the status.
07-29-2021
01:04 PM
@Vinay1991 The ZKs look okay; please go through the list I shared about connectivity and validate each item one by one.
07-28-2021
10:30 AM
1 Kudo
@Vinay1991 From the logs, I see connectivity loss, and that's precisely what's causing the NameNode switch. Remember from my earlier post the importance of the ZK quorum! Your NameNodes are losing their connection to ZooKeeper, so whenever a NameNode loses its active connection, ZooKeeper elects a new leader, and that's happening in a loop:

```
Caused by: java.net.SocketTimeoutException: 5000 millis timeout while waiting for channel to be ready for read
```

I would start by checking the firewall. I see you are on Ubuntu, so ensure the firewall is disabled across the cluster.

Identifying and Fixing Socket Timeouts

The root cause of a socket timeout is a connectivity failure between the machines, so try the usual process:

- Check the settings: is this the machine you really wanted to talk to?
- From the machine that is raising the exception, can you resolve the hostname?
- Is that resolved hostname the correct one?
- Can you ping the remote host?
- Is the target machine running the relevant Hadoop processes?
- Can you telnet to the target host and port?
- Can you telnet to the target host and port from any other machine?
- On the target machine, can you telnet to the port using localhost as the hostname? If this works but external network connections time out, it's usually a firewall issue.
- If it is a remote object store: is the address correct?
- Does it go away when you repeat the operation?
- Does it only happen on bulk operations? If the latter, it's probably due to throttling at the far end.

Check your hostname resolution: DNS or /etc/hosts should be in sync across all hosts. Another important thing is that all your hosts' clocks should be in sync. Can you share the value of the core-site.xml parameter ha.zookeeper.quorum?
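The first few checks in that list can be scripted; this is a minimal sketch of my own, where HOST and PORT are placeholders you would point at the NameNode or a ZooKeeper server (2181), and the /dev/tcp trick is used so it works even when telnet/nc aren't installed:

```shell
# Minimal connectivity triage. HOST/PORT are placeholder values --
# substitute the NameNode or ZooKeeper endpoint you are debugging.
HOST=localhost
PORT=2181

# 1. Name resolution: does the hostname resolve, and to the expected address?
getent hosts "$HOST"

# 2. TCP reachability: bash's /dev/tcp avoids depending on telnet or nc.
if timeout 5 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
  echo "TCP connect to $HOST:$PORT succeeded"
else
  echo "TCP connect to $HOST:$PORT failed (service down, wrong host, or firewall)"
fi
```

Run it from the machine that raises the SocketTimeoutException, then again from another host, to tell a dead service apart from a firewall rule.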
07-27-2021
12:35 PM
2 Kudos
@Vinay1991 Unfortunately, you haven't described your cluster setup, but my assumption is that you have 3 ZKs in your HA implementation. Two components are deployed to Hadoop HDFS to implement automatic failover:

- ZKFailoverController process (ZKFC)
- ZooKeeper quorum (3 ZKs)

1. ZKFailoverController (ZKFC)

The ZKFC is a ZooKeeper client that is also responsible for managing and monitoring the NameNode state. ZKFC runs on every node in the Hadoop cluster that runs a NameNode. It is responsible for:

Health monitoring: the ZKFC periodically heartbeats the NameNode with health-check commands. As long as the NameNode responds with a healthy status in a timely manner, it considers the NameNode healthy. If the NameNode has crashed, frozen, or entered an unhealthy state, it marks the NameNode as unhealthy.

ZooKeeper session management: the ZKFC keeps a session open in ZooKeeper while the local NameNode is healthy. If the local NameNode is the active NameNode, it also holds a special lock znode within that session. This lock uses ZooKeeper's support for "ephemeral" nodes: if the session expires, the lock node is deleted automatically.

ZooKeeper-based election: when the local NameNode is healthy and the ZKFC finds that no other NameNode holds the lock znode, it tries to acquire the lock itself. If it succeeds, the ZKFC has won the election and is now responsible for running the failover to make its local NameNode active. The failover process run by the ZKFC is similar to the manual failover described in the NameNode High Availability article.

2. ZooKeeper quorum

A ZK quorum is a highly available service for maintaining small amounts of coordination data. It notifies clients about changes in that data and monitors clients for failures. The HDFS implementation of automatic failover depends on ZooKeeper for the following:

How it detects NN failure: each NameNode machine in the Hadoop cluster maintains a persistent session in ZooKeeper. If either machine crashes, its ZooKeeper session expires, and ZooKeeper then notifies the other NameNode to start the failover process.

Active NameNode election: ZooKeeper provides a simple mechanism to exclusively elect the active NameNode. If the active NameNode fails, a standby NameNode may take the special exclusive lock in ZooKeeper, stating that it should become the next active NameNode.

After the HealthMonitor is initialized, internal threads periodically call the NameNode's HAServiceProtocol RPC interface to check the health status of the NameNode. If the HealthMonitor detects a change in the NameNode's health, it calls back the corresponding method registered by the ZKFailoverController. If the ZKFailoverController decides that an active-standby switch is needed, it first uses the ActiveStandbyElector to conduct an automatic election; the ActiveStandbyElector interacts with ZooKeeper to complete the election, then calls back the ZKFailoverController to notify the current NameNode to become active or standby. Finally, the ZKFailoverController calls the NameNode's HAServiceProtocol RPC interface to transition the NameNode to the Active or Standby state.

Taking all the above into account, the first component logs to check are ZK and NN:

/var/log/hadoop/hdfs
/var/log/zookeeper

My suspicion is that you have issues with the NameNode heartbeat, which makes ZooKeeper fail to get the pingback in time, mark the NN as dead, and elect a new leader, and that keeps happening in a loop. So check those ZK logs, and ensure time is set correctly and in sync! Please revert.
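For reference, ha.zookeeper.quorum lives in core-site.xml and should list every ZooKeeper server in the quorum. A minimal sketch with hypothetical hostnames (substitute your own):

```xml
<!-- core-site.xml: hypothetical hostnames; list all three ZK servers. -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
```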
07-26-2021
10:51 PM
@sipocootap2 Unfortunately, you cannot disallow snapshots on a snapshottable directory that already has snapshots! Yes, you will have to list and delete the snapshots. Even if a snapshot contains subdirectories, you only pass the root snapshot name to the hdfs dfs -deleteSnapshot command. If you had:

```shell
$ hdfs dfs -ls /app/tomtest/.snapshot
Found 2 items
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/sipo/work/john
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/tap2/work/peter
```

you would simply delete the snapshots like:

```shell
$ hdfs dfs -deleteSnapshot /app/tomtest/ sipo
$ hdfs dfs -deleteSnapshot /app/tomtest/ tap2
```
07-26-2021
02:53 PM
1 Kudo
@sipocootap2 Here is a walkthrough on how to delete a snapshot.

Create a directory:

```shell
$ hdfs dfs -mkdir -p /app/tomtest
```

Change the owner:

```shell
$ hdfs dfs -chown -R tom:developer /app/tomtest
```

To be able to create a snapshot, the directory has to be snapshottable:

```shell
$ hdfs dfsadmin -allowSnapshot /app/tomtest
Allowing snaphot on /app/tomtest succeeded
```

Now create 3 snapshots:

```shell
$ hdfs dfs -createSnapshot /app/tomtest sipo
Created snapshot /app/tomtest/.snapshot/sipo
$ hdfs dfs -createSnapshot /app/tomtest coo
Created snapshot /app/tomtest/.snapshot/coo
$ hdfs dfs -createSnapshot /app/tomtest tap2
Created snapshot /app/tomtest/.snapshot/tap2
```

Confirm the directory is snapshottable:

```shell
$ hdfs lsSnapshottableDir
drwxr-xr-x 0 tom developer 0 2021-07-26 23:14 3 65536 /app/tomtest
```

List all the snapshots in the directory:

```shell
$ hdfs dfs -ls /app/tomtest/.snapshot
Found 3 items
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/coo
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/sipo
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/tap2
```

Now delete the snapshot coo:

```shell
$ hdfs dfs -deleteSnapshot /app/tomtest/ coo
```

Confirm the snapshot is gone:

```shell
$ hdfs dfs -ls /app/tomtest/.snapshot
Found 2 items
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/sipo
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/tap2
```

Voila. To delete a snapshot the format is hdfs dfs -deleteSnapshot <path> <snapshotName>, i.e. hdfs dfs -deleteSnapshot /app/tomtest/ coo. Notice the space and the omission of .snapshot: like all dot files, the snapshot directory is not visible with normal HDFS commands. The plain -ls command gives 0 results:

```shell
$ hdfs dfs -ls /app/tomtest/
```

while the special path shows the 2 remaining snapshots:

```shell
$ hdfs dfs -ls /app/tomtest/.snapshot
Found 2 items
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/sipo
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/tap2
```

Is there a command to disallow snapshots for all the subdirectories? Yes, but only after you have deleted all the snapshots therein (or, better, disallow snapshots at directory creation time):

```shell
$ hdfs dfsadmin -disallowSnapshot /app/tomtest/
disallowSnapshot: The directory /app/tomtest has snapshot(s). Please redo the operation after removing all the snapshots.
```

The only way I have found that works for me (and permits me to have a cup of coffee) is to first list all the snapshots and copy-paste the deletes. Even with 60 snapshots it works, and I only come back when the snapshots are gone. It is not automated, though. Pasted together, the commands below run one after the other:

```shell
hdfs dfs -deleteSnapshot /app/tomtest/ sipo
.....
....
hdfs dfs -deleteSnapshot /app/tomtest/ tap2
```

Note that -deleteSnapshot skips the trash by default! Happy hadooping.
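The copy-paste step above can also be sketched as a small loop. This is my own sketch, not a documented procedure: it assumes the /app/tomtest path from the walkthrough and the usual `hdfs dfs -ls` output format (a "Found N items" header, then one line per entry whose last field is the full path):

```shell
# Sketch: delete every snapshot under a snapshottable directory.
# SNAP_ROOT is the hypothetical directory from the walkthrough above.
SNAP_ROOT=/app/tomtest
# Skip the "Found N items" header line, keep the last field (the full
# snapshot path), and strip the prefix with basename to get the name.
hdfs dfs -ls "$SNAP_ROOT/.snapshot" 2>/dev/null | awk 'NR>1 {print $NF}' |
while read -r snap_path; do
  snap_name=$(basename "$snap_path")
  echo "Deleting snapshot: $snap_name"
  hdfs dfs -deleteSnapshot "$SNAP_ROOT" "$snap_name"
done
```

Remember that -deleteSnapshot bypasses the trash, so a loop like this is irreversible; double-check SNAP_ROOT first.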
07-26-2021
01:20 PM
@enirys As suggested, we need more details, and there is no silver bullet. A piece of advice from experience: it's better to open a new thread and give as many details as possible:

- OS
- HDP version
- Ambari version
- MIT or AD Kerberos
- Documented steps or official document reference
- Your Kerberos config: krb5.conf, kdc.conf, kadm5.acl
- Hosts files
- Node count [single or multi-node]

Just any information that reduces the back-and-forth of posts and gives members the info needed to help. Cheers
07-26-2021
02:27 AM
@ambari275 Great! Please accept the answer so the thread can be closed and referenced by other users. Happy hadooping!!!