Member since: 11-12-2018
Posts: 218
Kudos Received: 178
Solutions: 35
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 280 | 08-08-2025 04:22 PM
 | 349 | 07-11-2025 08:48 PM
 | 553 | 07-09-2025 09:33 PM
 | 1081 | 04-26-2024 02:20 AM
 | 1427 | 04-18-2024 12:35 PM
06-27-2022
09:33 AM
@haze5736 Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks
06-27-2022
07:46 AM
Hi @ds_explorer, it seems the edit log is too big for the NameNode to read completely within the default/configured timeout:

2022-06-25 08:32:24,872 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.namenode.EditLogInputException: Error replaying edit log at offset 554705629. Expected transaction ID was 60366342312
Recent opcode offsets: 554704754 554705115 554705361 554705629
.....
Caused by: java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LengthPrefixedReader.decodeOpFrame(FSEditLogOp.java:4488)

To fix this, add the parameter and value below (if you already have it, increase the value):

HDFS > Configuration > JournalNode Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml
hadoop.http.idle_timeout.ms=180000

Then restart the required services.
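For reference, a safety-valve entry like the one above is typically rendered into hdfs-site.xml as a standard property element (a sketch based on the property name and value suggested in the post):

```xml
<!-- Rendered form of the safety-valve entry; 180000 ms is the value
     suggested above and can be raised further if the edit log is very large. -->
<property>
  <name>hadoop.http.idle_timeout.ms</name>
  <value>180000</value>
</property>
```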
04-10-2021
12:53 AM
1 Kudo
Hi @ryu, in that case you might need to build some custom in-house monitoring scripts using the YARN REST APIs, or use tools like Prometheus or Grafana for your use case. Please also refer to the links below for more insight:
https://www.programmersought.com/article/61565532790/
http://rokroskar.github.io/monitoring-spark-on-hadoop-with-prometheus-and-grafana.html
https://www.linkedin.com/pulse/how-monitor-yarn-application-via-restful-api-wayne-zhu/
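If you do end up scripting this yourself, a minimal Python sketch of parsing the ResourceManager's /ws/v1/cluster/apps response might look like the following. The sample payload and the ResourceManager host in the comment are hypothetical; the field names ("apps" -> "app", allocatedMB, vcoreSeconds) follow the YARN ResourceManager REST API.

```python
def summarize_apps(payload):
    """Extract (app id, state, allocatedMB, vcoreSeconds) per application
    from a /ws/v1/cluster/apps JSON response."""
    apps = (payload.get("apps") or {}).get("app") or []
    return [
        (a["id"], a["state"], a.get("allocatedMB", 0), a.get("vcoreSeconds", 0))
        for a in apps
    ]

# In a real script you would fetch the JSON from the ResourceManager,
# e.g. (hypothetical host):
#   import json, urllib.request
#   payload = json.load(urllib.request.urlopen(
#       "http://rm-host:8088/ws/v1/cluster/apps?states=RUNNING"))
sample = {
    "apps": {"app": [
        {"id": "application_1_0001", "state": "RUNNING",
         "allocatedMB": 4096, "vcoreSeconds": 120},
        {"id": "application_1_0002", "state": "FINISHED",
         "allocatedMB": 0, "vcoreSeconds": 3600},
    ]}
}

for app_id, state, mb, vcs in summarize_apps(sample):
    print(f"{app_id}\t{state}\tallocatedMB={mb}\tvcoreSeconds={vcs}")
```

A script like this can be run on a schedule and its output shipped to whatever time-series store you already use, which is essentially what the Prometheus/Grafana integrations in the links above automate.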
02-23-2021
01:44 AM
Thanks @adrijand for sharing your updates; it's much appreciated.
02-09-2021
04:57 AM
Hi @joyabrata, I think you are looking at the Data Lake tab, which is a different one. Go to the Summary tab, scroll down to the FreeIPA section, click Actions, and select Get FreeIPA Certificate from the drop-down menu. Hope this helps.
01-06-2021
03:26 AM
Thanks, that's very helpful input for debugging performance issues and tuning jobs. As far as I can see, there aren't any metrics that provide a simple way to track overall job performance over time on a shared cluster. Aggregated resource usage seems the closest thing, but on a shared cluster I think I will just need to accept that it can vary widely between two identical job runs depending on the state of the cluster, so while it gives some indication of job performance, it's not really a panacea. I was looking for something like what I would usually track for a web app (e.g. response times of a web server under a given load), which helps me spot when performance regressions happen. I guess that's not such a straightforward thing to do with the kinds of workloads Spark handles!
12-22-2020
10:31 PM
Hi @murali2425 @vchhipa, it seems there is a dependency issue while building your custom NiFi processor: the org.apache.nifi:nifi-standard-services-api-nar dependency needs to be added to the pom.xml of the nifi-*-nar module. Ref here:

<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-standard-services-api-nar</artifactId>
    <version>1.11.3</version>
    <type>nar</type>
</dependency>

Please modify your pom.xml, rebuild, and see whether that fixes the issue. Please accept the answer you found most useful.
07-20-2020
12:27 AM
1 Kudo
Can you verify whether you followed all the steps listed in the documentation? https://docs.cloudera.com/runtime/7.1.1/ozone-storing-data/topics/ozone-setting-up-ozonefs.html
06-06-2020
09:38 PM
Hi @Ettery, can you try adding those properties in nifi.properties? The Docker configuration has been updated to allow proxy whitelisting from the run command, and host header protection is only enforced on "secured" NiFi instances. This should make it much easier to quickly deploy sandbox environments like the one you are setting up. You can also try passing -e NIFI_WEB_HTTP_HOST=<host> in the docker run command:

docker run --name nifi -p 9090:9090 -d -e NIFI_WEB_HTTP_PORT='9090' -e NIFI_WEB_HTTP_HOST=<host> apache/nifi:latest

There is also an example configuration and documentation on GitHub for running NiFi behind a reverse proxy that you may be interested in. For more detail, refer to stackoverflow1 and stackoverflow2.
06-06-2020
09:15 PM
Glad to hear that you have finally found the root cause of this issue. Thanks for sharing @Heri