Member since 09-07-2016 · 5 Posts · 1 Kudos Received · 0 Solutions
01-18-2022
01:20 AM
In my case, the Spark job worked fine on some hosts but hit the above exception on a couple of worker hosts. The cause was the `spark-submit --version` output differing between hosts: on the working hosts spark-submit was version 2.4.7.7.1.7.0-551, while on the non-working hosts it was version 3.1.2. I created a symbolic link to the correct spark-submit binary and the issue was resolved.

```
[root@host bin]# cd /usr/local/bin
[root@host bin]# ln -s /etc/alternatives/spark-submit spark-submit
```
02-20-2019
06:26 AM
When the first attempt fails, YARN tries to run the application again. That is why the status changes from "RUNNING" back to "ACCEPTED". If you check the ResourceManager web UI, you can see that several attempts were made.
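The same information is visible from the command line, and the retry behavior can be limited per job. A sketch, assuming a configured YARN client; the application ID and jar name below are hypothetical:

```
# List all attempts recorded for an application (ID is hypothetical)
yarn applicationattempt -list application_1518112345678_0001

# Cap the number of AM attempts for a single Spark job, so a first
# failure is reported instead of silently retried
spark-submit --master yarn --conf spark.yarn.maxAppAttempts=1 app.jar
```

Cluster-wide, the upper bound on attempts is `yarn.resourcemanager.am.max-attempts` in yarn-site.xml.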
12-21-2018
12:55 AM
I can't see the relationship between yarn.scheduler.minimum-allocation-mb and the error that is reported. According to the Hadoop documentation, yarn.scheduler.minimum-allocation-mb is the container memory minimum. But in this case the container is running out of memory, so it makes sense to increase the maximum allocation instead. Anyway, as was already answered, increasing "mapreduce.map.memory.mb" and "mapreduce.reduce.memory.mb" should work, as those parameters control how much memory is available to the map and reduce tasks run by Hive.
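These parameters can be overridden per session from within Hive rather than cluster-wide. A sketch; the values below are purely illustrative and must be tuned to the cluster's container limits:

```
-- Illustrative values only; tune to your cluster.
SET mapreduce.map.memory.mb=4096;
SET mapreduce.map.java.opts=-Xmx3276m;      -- JVM heap ~80% of container size
SET mapreduce.reduce.memory.mb=8192;
SET mapreduce.reduce.java.opts=-Xmx6553m;   -- same 80% rule for reducers
```

Keeping the JVM heap (`java.opts`) below the container size matters: if the heap equals the container limit, YARN kills the container before the JVM can report an OutOfMemoryError.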
10-24-2016
08:52 AM
1 Kudo
In Linux, according to http://web.mit.edu/kerberos/www/krb5-1.9/krb5-1.9.4/doc/krb5-admin.html the default Kerberos credential cache files are stored in the /tmp folder and match the pattern /tmp/krb5cc_<uid>, where <uid> is your UNIX user ID in decimal format. Hope it helps.
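A quick way to locate the cache for the current user is to build the path from `id -u`. A sketch, assuming the default /tmp/krb5cc_<uid> naming described above (the KRB5CCNAME environment variable, if set, overrides this default):

```shell
# Build the default Kerberos credential cache path for the current user.
# KRB5CCNAME, if set, takes precedence over this default location.
uid=$(id -u)
cache="/tmp/krb5cc_${uid}"
echo "$cache"
ls -l "$cache" 2>/dev/null || echo "no cache file yet (run kinit first)"
```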