Member since
08-10-2018
8
Posts
0
Kudos Received
0
Solutions
09-18-2018
10:20 AM
Thanks Tarum, I will use the calculator to configure it.
09-17-2018
06:53 PM
I am running it as the admin user with spark-submit:

export PYTHONIOENCODING=utf8; time spark-submit -v --master yarn --deploy-mode cluster --driver-memory 8G --conf spark.network.timeout=10000000 --conf spark.executor.heartbeatInterval=1000000 --conf spark.dynamicAllocation.enabled=true --conf spark.shuffle.service.enabled=true --conf spark.default.parallelism=2200 --conf spark.sql.shuffle.partitions=2200 --conf spark.driver.maxResultSize="4G" test.py

When I reduced yarn.scheduler.minimum-allocation-mb from 4G to 1G, the error changed from the 12 GB limit to: "is running beyond physical memory limits. Current usage: 10.3 GB of 9 GB physical memory used". So... how are those limits calculated?
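A rough sketch of where those container limits can come from, assuming Spark's default memory-overhead rule (10% of the heap, with a 384 MB floor) and YARN rounding each request up to a multiple of yarn.scheduler.minimum-allocation-mb; the function name and structure here are illustrative, not part of any Spark or YARN API:

```python
# Sketch: estimate the YARN container size granted for a Spark JVM,
# under the assumption that
#   request = heap + max(0.10 * heap, 384 MB)
# and that YARN rounds the request up to the next multiple of
# yarn.scheduler.minimum-allocation-mb.
def container_size_mb(heap_mb, min_allocation_mb,
                      overhead_factor=0.10, min_overhead_mb=384):
    overhead = max(int(heap_mb * overhead_factor), min_overhead_mb)
    requested = heap_mb + overhead
    # Ceiling division: round the request up to a whole number of
    # minimum-allocation increments.
    increments = -(-requested // min_allocation_mb)
    return increments * min_allocation_mb

# --driver-memory 8G with a 4 GB minimum allocation:
# 8192 + 819 = 9011 MB, rounded up to 3 * 4096 = 12288 MB (12 GB).
print(container_size_mb(8192, 4096))  # 12288

# Same 8 GB heap after lowering the minimum allocation to 1 GB:
# 9011 MB rounds up to 9 * 1024 = 9216 MB (9 GB).
print(container_size_mb(8192, 1024))  # 9216
```

Under these assumptions the two error messages in the thread line up: the 12 GB limit comes from rounding an ~9 GB request up to the 4 GB allocation increment, and lowering the increment to 1 GB tightens the limit to 9 GB.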
09-17-2018
09:51 AM
Hi, I am getting the typical error: Container [pid=18542,containerID=container_e75_1537176390063_0001_01_000001] is running beyond physical memory limits. Current usage: 12.6 GB of 12 GB physical memory used; 19.0 GB of 25.2 GB virtual memory used. But I do not have 12 GB configured anywhere in Ambari, neither under YARN nor MapReduce2. Where does that value come from? Thanks, Roberto
Labels:
Apache YARN