Hi there,
I would like to install Spark 2 on a cluster that is set up through a Cloudera Director script.
I used this script as an example: https://github.com/cloudera/director-scripts/blob/master/configs/aws.reference.conf
More precisely, my configuration contains:
cloudera-manager {
    csds: [
        "http://archive.cloudera.com/spark2/csd/SPARK2_ON_YARN-2.2.0.cloudera1.jar"
    ]
}

cluster {
    products {
        ...
        SPARK2: 2
    }
    parcelRepositories: ["http://archive.cloudera.com/spark2/parcels/2.2.0.cloudera1/"]
    services: [..., SPARK2_ON_YARN]
    masters-1 {
        roles {
            SPARK2_ON_YARN: [SPARK2_YARN_HISTORY_SERVER]
        }
    }
}
However, the Spark2 service never starts, and when I attempt to start it manually through the Cloudera Manager interface, I get the following error:
"This role requires the following additional parcels to be activated before it can start: [spark2]."
Do you have any idea what is missing from the configuration?
Thank you in advance.