Member since: 10-11-2022
Posts: 128
Kudos Received: 20
Solutions: 10
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1100 | 11-07-2024 10:00 PM |
|  | 1691 | 05-23-2024 11:44 PM |
|  | 1513 | 05-19-2024 11:32 PM |
|  | 7725 | 05-18-2024 11:26 PM |
|  | 2770 | 05-18-2024 12:02 AM |
09-05-2025
04:17 AM
In NiFi 2.4 and above, the built-in Jython engine (a Python 2 interpreter) for ExecuteScript has been removed, so the traditional approach of inline Python scripts is no longer supported. There are, however, robust modern alternatives: NiFi's first-class Python processor support for attribute manipulation, Groovy or Clojure scripts in ExecuteScript, or simply UpdateAttribute for straightforward logic. See https://nifi.apache.org/nifi-docs/python-developer-guide.html and https://nifi.apache.org/components/org.apache.nifi.processors.script.ExecuteScript/
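If you go the native Python processor route, the rough shape is: drop the processor's .py file into NiFi's Python extensions directory and point NiFi at a Python 3 interpreter in nifi.properties (the paths below are illustrative; the exact property names are covered in the Python developer guide linked above):
# Illustrative NiFi home directory; adjust to your install
ls /opt/nifi/python/extensions/
# Confirm the Python-related settings (interpreter command, extensions directory)
grep -i "nifi.python" /opt/nifi/conf/nifi.properties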
08-19-2025
01:56 AM
Hi, @Hadoop16 The error "Zookeeper connection string cannot be null" means the Router process is expecting ZooKeeper configuration for token management but isn’t finding it. Even if you already have a ZooKeeper quorum set in core-site.xml, Router-Based Federation requires its own settings in hdfs-rbf-site.xml. Specifically, you need to set hadoop.kms.authentication.zk-dt-secret-manager.zkConnectionString (or, in some versions, hadoop.security.token.service.use_ip plus hadoop.zk.address), depending on your setup. Please double-check that hdfs-rbf-site.xml contains the federation- and router-related properties, including the ZooKeeper connection string and Kerberos settings, and ensure the file is deployed to all Router nodes and included in the classpath. Also verify that the Router service user has Kerberos credentials and permission to connect to ZooKeeper. Once the ZK connection string is set properly, the Router daemon should start without this error.
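As a quick sanity check on a Router node (property names as noted above; adjust to whichever key your version actually uses):
# Confirm the RBF config file is present and mentions the ZooKeeper/token settings
grep -i "zk" $HADOOP_CONF_DIR/hdfs-rbf-site.xml
# Confirm what the common ZooKeeper address resolves to from core-site.xml / hdfs-site.xml
hdfs getconf -confKey hadoop.zk.address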
08-19-2025
01:54 AM
Hi, @linssab This error (java.lang.ArithmeticException: / by zero in HiveSplitGenerator) usually comes from Hive when the query compiles into an empty or invalid input split. With PutHive3QL, DELETE/UPDATE operations on ACID tables often trigger a full table scan, and if stats are missing or corrupted, Tez can fail this way. First, try running the same SQL directly in Hive CLI/Beeline to confirm it’s not NiFi-specific. Then, run ANALYZE TABLE <table> COMPUTE STATISTICS and ANALYZE TABLE <table> COMPUTE STATISTICS FOR COLUMNS to refresh stats. Also check that the table is bucketed/transactional as required for ACID.
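For example, from the shell via Beeline (connection string and table name are placeholders; substitute your own):
beeline -u "jdbc:hive2://hs2-host:10000/default" \
  -e "ANALYZE TABLE my_table COMPUTE STATISTICS; ANALYZE TABLE my_table COMPUTE STATISTICS FOR COLUMNS;"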
08-19-2025
01:51 AM
Hi, @Hz In HDFS 2.7.3, setting a storage policy on a directory does not immediately place new blocks directly into the target storage (e.g., ARCHIVE). New writes still go to default storage (usually DISK), and the Mover process is required to relocate both existing and newly written blocks to comply with the policy. The storage policy only marks the desired storage type, but actual enforcement happens through the Mover. This is expected behavior and you did not miss any configuration. There’s no way in 2.7.3 to bypass the Mover and force blocks to land directly in cold storage on write. Later Hadoop versions introduced improvements, but for your version, running the Mover is required.
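For reference, the typical sequence on 2.7.3 looks like this (path and policy name are placeholders):
hdfs storagepolicies -setStoragePolicy -path /data/cold -policy COLD
hdfs storagepolicies -getStoragePolicy -path /data/cold
# Relocate replicas so they comply with the policy
hdfs mover -p /data/cold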
08-19-2025
01:50 AM
Hi, @quangbilly79 Yes, you can continue to use HDFS normally while the Balancer is running. The Balancer only moves replicated block copies between DataNodes to even out disk usage; it does not modify the actual data files. Reads and writes are fully supported in parallel with balancing, and HDFS ensures data integrity through replication and checksums. The process may add some extra network and disk load, so you might see reduced performance during heavy balancing. There is no risk of data corruption caused by the Balancer. You don’t need to wait — it’s safe to continue your normal operations.
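If you want to limit the impact on running workloads, these are the usual knobs (values are examples only):
# Only move blocks on DataNodes more than 10% away from average utilization
hdfs balancer -threshold 10
# Cap balancing bandwidth per DataNode, in bytes per second (~100 MB/s here)
hdfs dfsadmin -setBalancerBandwidth 104857600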
08-19-2025
01:48 AM
Hi, @allen_chu Your jstack shows many DataXceiver threads stuck in epollWait, meaning the DataNode is waiting on slow or stalled client/network I/O. Over time, this exhausts threads and makes the DataNode unresponsive. Please check network health and identify if certain clients (e.g., 172.18.x.x) are holding connections open. Review these configs in hdfs-site.xml: dfs.datanode.max.transfer.threads, dfs.datanode.socket.read.timeout, and dfs.datanode.socket.write.timeout to ensure proper limits and timeouts. Increasing max threads or lowering timeouts often helps. Also monitor for stuck jobs on the client side.
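A few quick checks while diagnosing (the PID, port, and patterns are illustrative; older releases use 50010 rather than 9866 for data transfer):
# Currently configured transfer-thread limit
hdfs getconf -confKey dfs.datanode.max.transfer.threads
# How many DataXceiver threads are alive right now
jstack <datanode-pid> | grep -c DataXceiver
# Which client addresses hold connections to the data transfer port
ss -tnp | grep ':9866' | awk '{print $5}' | sort | uniq -c | sort -rn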
07-16-2025
03:55 AM
If you're using Conda:
Create the environment:
conda create -n pyspark_env python=3.9 numpy
Activate it:
conda activate pyspark_env
Tell Spark to use it:
export PYSPARK_PYTHON=$(which python)
export PYSPARK_DRIVER_PYTHON=$(which python)
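To confirm Spark will pick up the environment's interpreter and NumPy, a quick sanity check:
$PYSPARK_PYTHON -c "import numpy; print(numpy.__version__)"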
11-07-2024
10:00 PM
1 Kudo
@flashone HDFS: If a disk error is detected, HDFS can mark the affected disk as failed and stop using it. DataNodes are designed to handle disk failures gracefully, and if replication is set up correctly the data should remain accessible, though re-replication may temporarily increase load on other nodes to compensate for the lost replicas. The HDFS service itself will usually stay operational as long as other healthy disks and nodes are available.
YARN: NodeManagers handle disk failures by marking disks as unhealthy when disk-health monitoring is configured. When a disk fails, the NodeManager excludes it from the list of usable directories, and the NodeManager service continues running as long as other disks are healthy.
Impala: If Impala detects a disk I/O error, it stops using that disk. The Impala Daemon keeps running, but queries that rely on data stored on the failed disk might fail until the data can be accessed from another replica or node.
Kudu: Tablet Servers monitor disk health; if a disk fails, Kudu can mark it as failed and continue operating when other healthy disks are available. However, if the failure impacts multiple disks or replicas, it can lead to data availability issues.
In short, you can usually keep the services running if only a single disk fails and replication is properly configured, but it’s best to replace the failed disk promptly to avoid further risk. In HDFS and Kudu especially, losing additional disks could risk data loss or availability issues.
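On the HDFS side, a quick way to confirm how many volume failures a DataNode will tolerate before shutting itself down (the default is 0) is:
hdfs getconf -confKey dfs.datanode.failed.volumes.tolerated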
11-06-2024
02:06 AM
2 Kudos
@Bhavs Disabling Kerberos for a specific service in Hive isn’t directly supported, as Kerberos is typically enabled cluster-wide for enhanced security. If your cluster setup allows, you can configure a separate HiveServer2 instance without Kerberos: an additional HiveServer2 instance with hive.server2.authentication set to NONE, running separately from the kerberized services. Another option is to combine LDAP or custom authentication with Kerberos; if you only need to bypass Kerberos for certain users, setting up LDAP authentication together with Ranger might help.
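As an illustration (host and port are placeholders for the hypothetical second instance), clients would then connect without a Kerberos principal in the JDBC URL:
beeline -u "jdbc:hive2://hs2-plain-host:10001/default" -n hive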
05-30-2024
02:30 AM
1 Kudo
@sibin Ensure that the /hiveserver2 znode exists and contains the necessary configurations. The fact that ls /hiveserver2 returns an empty list suggests that HiveServer2 has not correctly registered its configuration in ZooKeeper. Look into the HiveServer2 logs for any errors or warnings related to ZooKeeper or Kerberos.
Create the Kerberos principal:
kadmin.local -q "addprinc -randkey khive/im19-vm4@IM19-V4.REALM"
Generate the keytab file:
kadmin.local -q "xst -k /etc/security/keytabs/khive.keytab khive/im19-vm4@IM19-V4.REALM"
Verify the keytab file:
klist -k /etc/security/keytabs/khive.keytab
Set permissions:
chown hive:hive /etc/security/keytabs/khive.keytab
chmod 400 /etc/security/keytabs/khive.keytab
Update hive-site.xml:
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>khive/im19-vm4@IM19-V4.REALM</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/khive.keytab</value>
</property>
Restart HiveServer2.
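It’s also worth confirming the keytab actually authenticates before the restart (principal and path as above):
kinit -kt /etc/security/keytabs/khive.keytab khive/im19-vm4@IM19-V4.REALM
klist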