Member since
10-11-2022
128
Posts
20
Kudos Received
10
Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 1099 | 11-07-2024 10:00 PM |
| 1690 | 05-23-2024 11:44 PM |
| 1513 | 05-19-2024 11:32 PM |
| 7722 | 05-18-2024 11:26 PM |
| 2770 | 05-18-2024 12:02 AM |
09-05-2025
05:39 AM
1 Kudo
However, I was able to resolve this by leveraging ExecuteStreamCommand (ESC). Specifically, I used its Output Destination Attribute property to write the required output into an attribute, which I could then process separately.
08-19-2025
10:33 PM
Is the source table a JdbcStorageHandler table? Please provide the DDL of the source table, the query used, and sample data if possible; this will help us understand the problem better. Also, validate the output of the `set -v` command, especially configurations like hive.tez.container.size.
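As a quick sketch of that last check, you can dump the effective session configuration from beeline and filter for the Tez memory settings. The JDBC URL below is a placeholder; substitute your own HiveServer2 endpoint.

```shell
# Placeholder JDBC URL -- replace with your HiveServer2 connection string.
# "set -v;" prints the full effective configuration for the session;
# grep narrows it down to the Tez container/AM memory settings.
beeline -u "jdbc:hive2://hs2-host:10000/default" -e "set -v;" 2>/dev/null \
  | grep -E "hive\.tez\.container\.size|tez\.am\.resource\.memory"
```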
08-19-2025
07:10 AM
@RAGHUY Thank you! I figured that out later, but the Router is now failing with the error below. I have the jaas.conf in place. Any help on this is appreciated.

ERROR client.ZooKeeperSaslClient - SASL authentication failed using login context 'ZKDelegationTokenSecretManagerClient' with exception: {}
javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null.
    at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:312)
    at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:275)
    at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:882)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:101)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:363)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)
2025-08-19 20:45:37,097 ERROR curator.ConnectionState - Authentication failed
2025-08-19 20:45:37,098 INFO zookeeper.ClientCnxn - Unable to read additional data from server sessionid 0x1088d05c6550015, likely server has closed socket, closing socket connection and attempting reconnect
2025-08-19 20:45:37,098 INFO zookeeper.ClientCnxn - EventThread shut down for session: 0x1088d05c6550015
2025-08-19 20:45:37,212 ERROR imps.CuratorFrameworkImpl - Ensure path threw exception
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hdfs-router-tokens
08-19-2025
01:51 AM
Hi, @Hz In HDFS 2.7.3, setting a storage policy on a directory does not immediately place new blocks directly into the target storage (e.g., ARCHIVE). New writes still go to default storage (usually DISK), and the Mover process is required to relocate both existing and newly written blocks to comply with the policy. The storage policy only marks the desired storage type, but actual enforcement happens through the Mover. This is expected behavior and you did not miss any configuration. There’s no way in 2.7.3 to bypass the Mover and force blocks to land directly in cold storage on write. Later Hadoop versions introduced improvements, but for your version, running the Mover is required.
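As a minimal sketch of that workflow, the sequence looks like the following. The /data/cold path is a placeholder for your own directory.

```shell
# Mark the directory with the COLD policy (ARCHIVE storage type).
# New writes still land on default storage until the Mover runs.
hdfs storagepolicies -setStoragePolicy -path /data/cold -policy COLD

# Confirm the policy is recorded on the directory.
hdfs storagepolicies -getStoragePolicy -path /data/cold

# Relocate existing and newly written blocks to satisfy the policy.
hdfs mover -p /data/cold
```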
08-19-2025
01:50 AM
Hi, @quangbilly79 Yes, you can continue to use HDFS normally while the Balancer is running. The Balancer only moves replicated block copies between DataNodes to even out disk usage; it does not modify the actual data files. Reads and writes are fully supported in parallel with balancing, and HDFS ensures data integrity through replication and checksums. The process may add some extra network and disk load, so you might see reduced performance during heavy balancing. There is no risk of data corruption caused by the Balancer. You don’t need to wait — it’s safe to continue your normal operations.
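For reference, a typical Balancer invocation looks like this; the 10% threshold is an illustrative value, not a recommendation for your cluster.

```shell
# Rebalance until every DataNode's utilization is within 10 percentage
# points of the cluster average. Runs in the foreground and can be
# interrupted safely; it only moves replica copies between DataNodes.
hdfs balancer -threshold 10
```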
08-19-2025
01:48 AM
Hi, @allen_chu Your jstack shows many DataXceiver threads stuck in epollWait, meaning the DataNode is waiting on slow or stalled client/network I/O. Over time, this exhausts threads and makes the DataNode unresponsive. Please check network health and identify if certain clients (e.g., 172.18.x.x) are holding connections open. Review these configs in hdfs-site.xml: dfs.datanode.max.transfer.threads, dfs.datanode.socket.read.timeout, and dfs.datanode.socket.write.timeout to ensure proper limits and timeouts. Increasing max threads or lowering timeouts often helps. Also monitor for stuck jobs on the client side.
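A minimal hdfs-site.xml fragment for the thread-limit side of this, with purely illustrative values that you would need to tune for your own cluster:

```xml
<!-- Illustrative values only; tune for your workload. -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value> <!-- default is 4096; raise if xceivers are exhausted -->
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>480000</value> <!-- milliseconds; lower to reclaim stalled writers sooner -->
</property>
```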
07-16-2025
03:55 AM
If you're using Conda:

# Create the environment
conda create -n pyspark_env python=3.9 numpy

# Activate it
conda activate pyspark_env

# Tell Spark to use it
export PYSPARK_PYTHON=$(which python)
export PYSPARK_DRIVER_PYTHON=$(which python)
11-17-2024
09:22 PM
@Bhavs, Did the response help resolve your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
11-11-2024
02:35 AM
1 Kudo
Thanks ~ good answer!
06-03-2024
03:49 PM
1 Kudo
@sibin Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.