Hello, I set up a single-node EC2 instance to run Cloudera Manager. The installation seemed to complete fine except for this HDFS warning: Bad : 718 under replicated blocks in the cluster. 720 total blocks in the cluster. Percentage under replicated blocks: 99.72%. Critical threshold: 40.00%. This is the error message I get when I run pyspark: ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (1024+384
MB) is above the max threshold (1024 MB) of this cluster! Please check
the values of 'yarn.scheduler.maximum-allocation-mb' and/or
'yarn.nodemanager.resource.memory-mb'. I'm not sure where to start looking, but if anybody can point me in the right direction I would appreciate it. This is my dashboard: I based my settings on this tutorial by Hue. Thanks!
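For what it's worth, here is a small sketch of how the numbers in the error add up. This is a simplified, hypothetical reconstruction of the sizing check (the real logic lives inside Spark's YARN client), using only the values shown in the error message above:

```python
# Simplified sketch of the YARN container sizing check Spark performs.
# All values are taken from the error message; the overhead formula
# (max of 384 MB or 10% of executor memory) is Spark's documented default.
executor_memory_mb = 1024                               # spark.executor.memory
overhead_mb = max(384, int(executor_memory_mb * 0.10))  # memory overhead -> 384
requested_mb = executor_memory_mb + overhead_mb         # 1024 + 384 = 1408
max_allocation_mb = 1024    # yarn.scheduler.maximum-allocation-mb on this cluster

# 1408 MB requested > 1024 MB allowed per container, so SparkContext fails.
print(requested_mb, requested_mb > max_allocation_mb)   # 1408 True
```

So either the YARN maximums (`yarn.scheduler.maximum-allocation-mb`, `yarn.nodemanager.resource.memory-mb`) need to be raised above 1408 MB, or the executor's memory request needs to shrink below the 1024 MB ceiling.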