Member since: 11-17-2021
Posts: 1128
Kudos Received: 257
Solutions: 29

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2978 | 11-05-2025 10:13 AM |
| | 484 | 10-16-2025 02:45 PM |
| | 1043 | 10-06-2025 01:01 PM |
| | 822 | 09-24-2025 01:51 PM |
| | 629 | 08-04-2025 04:17 PM |
07-09-2024
11:36 AM
@bigluman Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
07-09-2024
03:18 AM
1 Kudo
@uinn Could you let us know the heap size set for both the Hive and Hive Metastore services? Does the issue happen only while executing a few specific queries, or is the JVM pause occurring continuously? Does it appear at the Hive Metastore level or at the HiveServer2 level?
07-02-2024
11:54 PM
1 Kudo
Yes, almost the same behavior is observed with the retry strategy set to "penalize"; only the additional penalty duration gets added to the total time. For example, with the default penalty duration of 30 secs, 10 incoming flow files, and the number of retries set to 1: the 10 flow files are batched together and the first retry happens at 50 secs. The batched flow files are then penalized for 30 secs, and after another 50 secs they are routed to the failure relationship. So in total, with the penalize retry policy, PublishKafka takes (numberOfRetries + 1) * 5 secs * numberOfIncomingFlowFiles + penalty duration to route a file to the failure relationship. If retry is not checked, behavior similar to yield is observed: 5 * numberOfIncomingFlowFiles secs to route to the failure relationship, as shown in the photos. The penalty and yield settings are at their defaults. The target Kafka version is 3.4.0 and the number of partitions is 1. There are 3 NiFi nodes. The number of concurrent tasks on PublishKafkaRecord is 1, but execution is on all nodes, which I believe means 1 thread on each of the 3 nodes.
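To make the arithmetic above concrete, here is a small worked sketch of the estimate (assuming the observed ~5 secs of yield per flow file per attempt and the default 30 secs penalty; the variable names are mine, not NiFi's):

```java
public class PublishKafkaFailureTiming {
    public static void main(String[] args) {
        int numberOfRetries = 1;            // retries configured on the failure relationship
        int numberOfIncomingFlowFiles = 10; // flow files queued in front of PublishKafkaRecord
        int yieldPerFlowFileSecs = 5;       // observed ~5 secs of yield per flow file per attempt
        int penaltyDurationSecs = 30;       // default Penalty Duration

        // (numberOfRetries + 1) attempts, each ~5 secs per flow file, plus one penalty window
        int totalSecs = (numberOfRetries + 1) * yieldPerFlowFileSecs * numberOfIncomingFlowFiles
                + penaltyDurationSecs;      // (1 + 1) * 5 * 10 + 30 = 130 secs

        System.out.println("Estimated time to route to failure: " + totalSecs + " secs");
    }
}
```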
07-02-2024
05:16 AM
I have tried a lot to fix this problem. I even copied the whole nifi-2.0.0-M1 folder to a new computer, but I still get the same problem: the Maximum Value is not increasing, it just keeps counting up by 1.
07-01-2024
10:28 AM
@kaif Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
07-01-2024
10:27 AM
@NidhiPal09 Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
07-01-2024
10:27 AM
@prfbessa Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
06-28-2024
10:09 AM
1 Kudo
@Azusaings As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post. Thanks.
06-26-2024
01:52 AM
Thank you for the response and apologies for the delayed reply. I have created a custom tmp directory in core-site.xml as shown below. I ran stop-all.sh and start-all.sh under the sbin directory, but the custom tmp directory is still empty and not being used. Do I need to do anything else? Please suggest. Thanks in advance.

<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/osa/hadoop-2.10.2/tmp_Hadoop</value>
  <description>A base for other temporary directories.</description>
</property>
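If it helps, here is a minimal sketch for checking what value Hadoop actually resolves for hadoop.tmp.dir (my assumption: the program is run with your core-site.xml from $HADOOP_CONF_DIR on the classpath; the class name is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;

public class CheckHadoopTmpDir {
    public static void main(String[] args) {
        // new Configuration() loads core-default.xml plus core-site.xml from the classpath
        Configuration conf = new Configuration();
        // Should print /opt/osa/hadoop-2.10.2/tmp_Hadoop if the property is being picked up
        System.out.println("hadoop.tmp.dir = " + conf.get("hadoop.tmp.dir"));
    }
}
```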
06-25-2024
12:58 PM
Despite extensive efforts, I was unable to directly resolve the issue, but I devised a workaround. Rather than directly accessing the Hadoop Job object for status updates, I extracted the job ID after submitting the job. Using this job ID, I created an ApplicationId object, which I then used with my YarnClient. This approach enabled me to effectively monitor the status and completion rate of the running job. Interestingly, both my DistCp job and the YarnClient use the same HadoopConf object (the YarnClient is instantiated right after the DistCp Job is executed), and both are within the same scope of the UserGroupInformation. The exact reason why the YarnClient can access the necessary information while the Job object cannot remains unclear. Nevertheless, this workaround has successfully unblocked me. Additional context: I am using Java 8 and running on an Ubuntu Xenial image.
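For anyone hitting the same thing, a minimal sketch of what that workaround looks like (assumptions: `submittedJob` is the already-submitted DistCp Job, `conf` is the shared Configuration, and Hadoop 2.8+ for ApplicationId.fromString; the class and variable names are mine, not from the original code):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class DistCpYarnMonitor {

    // Looks up the YARN application backing an already-submitted MapReduce (DistCp) job
    // and returns its report, which exposes the state and progress (completion rate).
    static ApplicationReport reportFor(Job submittedJob, Configuration conf)
            throws IOException, YarnException {
        // A MapReduce job id "job_<clusterTimestamp>_<seq>" corresponds to the
        // YARN application id "application_<clusterTimestamp>_<seq>".
        String appIdStr = submittedJob.getJobID().toString().replaceFirst("^job_", "application_");
        ApplicationId appId = ApplicationId.fromString(appIdStr);

        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);   // same Configuration (and UGI scope) as the DistCp job
        yarnClient.start();
        try {
            ApplicationReport report = yarnClient.getApplicationReport(appId);
            System.out.println("State: " + report.getYarnApplicationState()
                    + ", progress: " + report.getProgress());
            return report;
        } finally {
            yarnClient.stop();
        }
    }
}
```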