I have a CDH 5.14 test installation on a pseudo-distributed cluster. Everything is fine except the Spark2 shells (Scala and Python).
The shells start up, but after a few lines nothing happens....
WARNING: User-defined SPARK_HOME (/opt/cloudera/parcels/SPARK2-2.2.0.cloudera2-1.cdh5.12.0.p0.232957/lib/spark2) overrides detected (/opt/cloudera/parcels/SPARK2/lib/spark2).
WARNING: Running spark-class from user-defined location.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/07/31 21:27:01 WARN util.Utils: Your hostname, centos.gbdmp resolves to a loopback address: 127.0.0.1; using 192.168.2.106 instead (on interface wlp3s0)
18/07/31 21:27:01 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
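In case the loopback warning matters here, this is a sketch of what the warning suggests (the address below is just the one reported in my log; adjust for your host):

```shell
# Pin Spark to the non-loopback interface before starting the shell.
# 192.168.2.106 is the address from the warning above -- replace as needed.
export SPARK_LOCAL_IP=192.168.2.106
spark2-shell
```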
As it works fine in local mode, there must be some YARN configuration issue. Which parameter do I have to set or increase?
By default, spark-shell runs in the root.default queue on YARN. Maybe this queue has no resources assigned. You can check this in the 'YARN Applications' view in Cloudera Manager. If you have defined queues that you used before, you can pass the --queue parameter to the spark-shell command.
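For example, a minimal sketch (the queue name root.myqueue is illustrative; use a queue from your own scheduler configuration that actually has resources):

```shell
# Launch the Spark2 shell against YARN, targeting a specific queue
# instead of root.default. 'root.myqueue' is a placeholder queue name.
spark2-shell --master yarn --queue root.myqueue
```

The same --queue option works for pyspark2 and spark2-submit.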