Member since: 10-20-2021
Posts: 21
Kudos Received: 1
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 737 | 05-03-2022 01:54 PM |
01-27-2022
04:41 AM
It's reported as "INFO", but then it doesn't submit the app on YARN; it just stays stuck:

22/01/27 12:38:44 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (3072 MB per container)
Exception in thread "main" java.lang.IllegalArgumentException: Required AM memory (5214+521 MB) is above the max threshold (3072 MB) of this cluster! Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.
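If I understand the message right, the two numbers are spark.driver.memory plus YARN's AM memory overhead, which should default to max(10% of driver memory, 384 MB); a quick back-of-the-envelope check under that assumption:

```
# Rough arithmetic behind the error, assuming the default 10% AM overhead
echo $(( 5214 + 5214 / 10 ))   # 5735 MB requested, vs. the 3072 MB threshold
```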
01-27-2022
02:35 AM
Hi, thanks for the tip. I checked, but everything seems OK:

<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>15360</value>
</property>

I raised it to 15360 MB a few minutes ago, and it is indeed there. But I still keep getting that error. YARN itself looks all good: no errors, no warnings, nothing.
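One way to confirm which value the ResourceManager actually loaded (not just what is on disk) is its live config dump; a minimal sketch, assuming the RM web UI is on the default port 8088 and rm-host is a placeholder for the ResourceManager host:

```
# Dump the RM's live configuration and filter for the property
curl -s http://rm-host:8088/conf | grep -A1 'yarn.scheduler.maximum-allocation-mb'
```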
01-27-2022
02:10 AM
Hi, since I'm in cluster mode I set spark.driver.memory = 6G, and it doesn't work: it keeps saying the maximum is 3072 MB. I tried on another cluster, and by changing yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb to a value lower than spark.driver.memory I got the same error as above. So at this point my guess is that YARN (on my cluster) isn't picking up the updated parameter values. I updated them from Ambari and restarted YARN many times, but nothing changed.
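Since in cluster mode the driver runs inside the Application Master, spark.driver.memory (plus overhead) is what gets checked against the scheduler maximum. Another thing worth cross-checking is what the NodeManagers actually registered with the ResourceManager; a sketch, with a hypothetical node id:

```
# List NodeManagers, then inspect one node's registered capacity
yarn node -list -all
yarn node -status worker01:45454   # check the Memory-Capacity field in the report
```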
01-27-2022
01:18 AM
I'm not actually using MapReduce, I'm using Spark, so I'm submitting via spark-submit. In fact the MapReduce setting yarn.app.mapreduce.am.resource.mb is at 1 GB, but the error says 3072. Is it possible that modifying the YARN values from Ambari doesn't actually update those values, even after a reboot?
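To rule that out, one simple check is the yarn-site.xml that Ambari renders on the ResourceManager host (assuming the standard HDP config path):

```
# Verify the value Ambari actually wrote to disk on the RM host
grep -A1 'yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml
```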
01-26-2022
04:32 AM
Hello everyone, I have a problem. I set these values from Ambari:

yarn.nodemanager.resource.memory-mb = 7820
yarn.scheduler.minimum-allocation-mb = 1024
yarn.scheduler.maximum-allocation-mb = 7820

I restarted YARN, but the same error keeps coming back. Why can't I have an AM memory bigger than 3072? Where does this 3072 MB come from? Should I edit some other setting? I can't find any. Thanks
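For context: as far as I understand, Spark reads that threshold from the ResourceManager's reply when it requests a new application, and on the Capacity Scheduler a per-queue maximum-allocation can silently cap it below the cluster-wide value. A quick check, assuming the default HDP config path:

```
# Look for a per-queue override of the cluster-wide maximum allocation
grep -i -A1 'maximum-allocation' /etc/hadoop/conf/capacity-scheduler.xml
```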
Labels:
- Apache Ambari
- Apache Spark
- Apache YARN
01-25-2022
02:37 PM
Hi everyone, as the title says, I'd like to run ANY generic application (written by users). So the idea is:

1) A user writes their own Python application (about anything they want, without using the Spark API)
2) I run this app via spark-submit (roughly as in the sketch after this list)
3) A SINGLE Docker container is created on YARN, with the resources I set

It actually works, but... since the users don't use Spark, the executors' role in the Spark mechanism is useless; or better, the user DOESN'T NEED them, so basically the user app should run only in the Application Master. I tried to set spark.executor.instances to 0 (in spark-submit), but an error appears during the submit saying it "must be a positive value". So, do you know if there is a way to disable/enable executors and just use the AM as I please? (Maybe assign all resources to the AM and 0 to the executors?) There will also be users who may want to use Spark, so they will have executors too. Thanks for any advice on that.
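For reference, step 2 is roughly this submit; the image name and script are placeholders, and the env vars assume the HDP 3 Docker-on-YARN runtime is enabled:

```
spark-submit \
  --master yarn --deploy-mode cluster \
  --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE=docker \
  --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=my_image:latest \
  user_app.py
```

One avenue I have considered but not verified: dynamic allocation with spark.dynamicAllocation.minExecutors=0 and spark.dynamicAllocation.initialExecutors=0, so that executors only spawn if tasks are actually scheduled.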
01-24-2022
12:00 PM
Hi everybody, I'm submitting jobs to a YARN cluster via SparkLauncher, under HDP 3.1.4.0. Now, I'd like to have only one executor for each job I run (since I often find 2 executors per job), with the resources that I decide (assuming, of course, those resources are available on a machine). So I tried to add:

.setConf("spark.executor.instances", "1")
.setConf("spark.executor.cores", "3")

But even with spark.executor.instances set to 1, I get 2 executors. Do you know why? (I read somewhere that the number of executors = spark.executor.instances * spark.executor.cores. I don't know if that's true, but it seems to match.) Is there a way to achieve my goal of exactly one executor (min and max) for each job? Could it be achieved with dynamicAllocation? (I'd prefer not to enable that, since it's not designed for this and does a lot of things I don't need.) Thanks in advance!
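A possible explanation I cannot confirm on my stack: some HDP builds ship with spark.dynamicAllocation.enabled=true by default, which would override an explicit instance count. A minimal sketch of a submit that pins it (your_app.jar is a placeholder; the same confs can be passed through SparkLauncher's setConf):

```
spark-submit --master yarn \
  --conf spark.dynamicAllocation.enabled=false \
  --conf spark.executor.instances=1 \
  --conf spark.executor.cores=3 \
  your_app.jar
```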
12-02-2021
05:36 AM
Hi, every time I run a job via spark-submit I get a delay caused by Spark trying to connect to the Hive metastore (and I don't need it, since from what I read it's only for SparkSQL via JDBC). I'd like to disable this connection attempt: how can I do it via Ambari? P.S. I have several Spark clients installed on different machines, zero Spark2 Thrift Servers installed, and zero Livy for Spark2 installed. Thanks for your help!!
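One knob that might do it, assuming the delay really is the Hive catalog initialization: submit with the in-memory catalog so the session never contacts the metastore (your_app.py is a placeholder):

```
spark-submit --conf spark.sql.catalogImplementation=in-memory your_app.py
```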
Labels:
- Apache Ambari
- Apache Hive
- Apache Spark
11-02-2021
11:26 AM
1 Kudo
I fixed it by editing the fields directly in the Ambari Postgres DB and then rebooting.
11-02-2021
11:24 AM
Hi, I have an issue that I also had a while ago, and I don't remember how I fixed it. Basically, when I try to connect to some HBase tables, I get the error: "org.apache.hadoop.hbase.NotServingRegionException: table XXX is not online on worker04". My HBase is 2.0. Another thing: if I restart HBase, I get around 200 regions in transit, but they disappear if I do "restart all region servers". That doesn't fix the problem, though. I recall that I did something with ZooKeeper, with the zkCli, but I can't remember what. I think the problem is with the meta table, but since I'm in a production environment I must be very careful. Thanks for your help!
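Before changing anything in production, a read-only look at the region states recorded in meta should be safe; the HBCK2 jar path and region id below are placeholders, and on HBase 2.x assignment repairs go through the separate HBCK2 tool rather than the old hbck:

```
# Read-only: inspect region states recorded in hbase:meta
echo "scan 'hbase:meta', {COLUMNS => 'info:state'}" | hbase shell
# If a region is genuinely stuck, HBCK2 can re-assign it (use with extreme care)
hbase hbck -j /path/to/hbase-hbck2.jar assigns <encoded_region_id>
```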