I'm planning to upgrade my CDH version to 5.10.2, and some of our developers need Spark 2.1 for Spark Streaming.
I'm planning to manage the two versions with Cloudera Manager: 1.6 will be the integrated one, and Spark 2.1 will come from parcels.
1- Should I add Spark 2 as a separate service? Will Cloudera Manager let me have two Spark services, the regular one and the Spark 2.1 one?
2- Is it preferable to install the Spark 2 roles and gateways on the same servers as the regular ones? I assume the History Servers can run on different hosts, or on the same host using different ports; what will it look like when I add two gateways on the same DataNode?
3- Is this setup complicated to manage?
4- Is there any way the two versions could conflict and affect the current Spark jobs?
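On question 4, my understanding (an assumption based on how Cloudera packages its Spark 2 parcel, worth verifying in the docs) is that each version ships its own client scripts, so jobs pick a version explicitly instead of colliding. A minimal sketch of that idea, with a purely illustrative wrapper function:

```shell
# Hypothetical wrapper mapping a requested major version to a client script.
# The command names follow Cloudera's convention (assumption): spark-submit
# for the built-in 1.6 service, spark2-submit for the Spark 2 parcel.
submit_cmd() {
  case "$1" in
    2) echo "spark2-submit" ;;   # Spark 2.1 (parcel)
    *) echo "spark-submit"  ;;   # Spark 1.6 (CDH built-in)
  esac
}

submit_cmd 1   # prints: spark-submit
submit_cmd 2   # prints: spark2-submit
```

Because the binaries have distinct names, existing 1.6 jobs should keep running unchanged while new jobs opt in to 2.1 explicitly.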
Yes, I restarted both the CM server and the Cloudera Management Service, and Spark 2 still isn't showing up as a service in the cluster.
I see JDK 1.7 used with CDH 5.9 on the cluster I'm on. I read somewhere that Spark 2 requires JDK 1.8. Could that be preventing the Spark 2 service from appearing?
Also, is there a way to confirm the CSD file is properly deployed? I also don't see Scala 2.11 libraries under /opt/cloudera/parcels/CDH/jars, only Scala 2.10 libraries.
I heard that both Scala 2.10 and 2.11 are installed with CDH 5.7 and later. Shouldn't Scala 2.11 be available? Could this also be a reason the Spark 2 service isn't appearing?
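For the "is the CSD deployed" question, a rough sketch of the checks one could run on the CM server host. The default CSD directory (/opt/cloudera/csd) and the SPARK2_ON_YARN jar name are assumptions taken from Cloudera's usual layout; adjust them to your install:

```shell
# CSD_DIR defaults to Cloudera Manager's usual CSD location (assumption).
CSD_DIR="${CSD_DIR:-/opt/cloudera/csd}"

check_csd() {
  # The Spark 2 CSD is a single jar, named something like
  # SPARK2_ON_YARN-2.1.0.cloudera1.jar (illustrative name).
  if ls "$CSD_DIR"/SPARK2_ON_YARN-*.jar >/dev/null 2>&1; then
    echo "csd: deployed"
  else
    echo "csd: missing"
  fi
}

jdk_is_18() {
  # Check a 'java -version' banner line, e.g.: java version "1.8.0_121"
  case "$1" in *'"1.8'*) return 0 ;; *) return 1 ;; esac
}

check_csd
jdk_is_18 "$(java -version 2>&1 | head -n 1)" \
  && echo "jdk: 1.8" || echo "jdk: not 1.8"
```

If the jar is missing from that directory, CM has nothing to register, which would explain the service not appearing; a CM server restart is still needed after the jar is dropped in.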
I followed all the steps as described and they all completed successfully; the Spark 2 parcel is activated now.