Zeppelin max open file limits
Labels: Apache Spark, Apache Zeppelin
Running some queries, we get this error:
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 11 in stage 102.0 failed 1 times, most recent failure: Lost task 11.0 in stage 102.0 (TID 4722, localhost, executor driver): java.io.FileNotFoundException: /tmp/blockmgr-95c88a5a-8d68-4b16-878d-158f40123999/1c/temp_shuffle_f82e49e7-a383-4803-82b3-030425703624 (Too many open files)
We modified /etc/security/limits.conf, adding these values:
zeppelin soft nofile 32000
zeppelin hard nofile 32000
However, checking the open file limit of the running Zeppelin process, the value is still 4096:
[zeppelin@ ~]$ cat /proc/24222/limits
Limit               Soft Limit   Hard Limit   Units
Max processes       510371       510371       processes
Max open files      4096         4096         files
Max locked memory   65536        65536        bytes
[zeppelin@ ~]$
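For reference, the limit of the already-running process can also be inspected and raised on the fly with prlimit from util-linux (a sketch, assuming prlimit is available and run as root; 24222 is the Zeppelin PID from the output above):

# Show the current nofile limit of the Zeppelin process
prlimit --pid 24222 --nofile
# Raise the soft and hard nofile limits without a restart (root required)
prlimit --pid 24222 --nofile=32000:32000

Note this only patches the live process; the limit reverts to whatever the parent process sets on the next restart.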
Is there some other configuration file that needs to be set?
Thanks in advance
Created 10-17-2021 11:54 PM
Hi @Paop
We don't have enough information (data volume, the spark-submit command, etc.) to provide a solution. Please raise a support case for this issue.
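As a general pointer (not a confirmed fix for this setup): limits.conf is applied by pam_limits only to login sessions, so it will not affect a Zeppelin daemon started by systemd or an init script. In that case the limit has to be set where the service is launched. A minimal sketch, assuming Zeppelin runs as a systemd service named zeppelin.service (adjust the unit name for your installation):

# Create an override for the Zeppelin unit
sudo systemctl edit zeppelin
# In the editor that opens, add:
#   [Service]
#   LimitNOFILE=32000
sudo systemctl daemon-reload
sudo systemctl restart zeppelin
# Verify against the new Zeppelin PID
cat /proc/<new-zeppelin-pid>/limits

If Zeppelin is instead started from a shell script, adding an explicit ulimit -n 32000 in conf/zeppelin-env.sh (which the Zeppelin start scripts source) is another commonly used place to set it.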