07-13-2018
02:20 AM
Hi Yuexin,

Thanks for your response. I have gone through all these links and many more while researching this issue, and I have already done all of this configuration: setting the JAAS file for both the driver and the executors, and setting the Kafka SSL settings in the kafkaParams in the program.

    $SPARK_HOME/bin/spark-submit \
      --conf spark.yarn.queue=$yarnQueue \
      --conf spark.hadoop.yarn.timeline-service.enabled=false \
      --conf spark.yarn.archive=$sparkYarnArchive \
      $sparkOpts \
      --properties-file $sparkPropertiesFile \
      --files /conf/kafka/kafka_client_jaas_dev.conf,/conf/kafka/krb5_dev.conf,/conf/keystore/kafka_client_truststore.jks,conf/kafka/kafka/kafkausr.keytab \
      --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=kafka_client_jaas_dev.conf -Djava.security.krb5.conf=krb5_dev.conf -Dsun.security.krb5.debug=true" \
      --driver-java-options "-Djava.security.auth.login.config=/conf/kafka_client_jaas_dev.conf -Djava.security.krb5.conf=/conf/krb5_dev.conf" \
      --class com.commerzbank.streams.KafkaHDFSPersister $moduleJar \
      $1 $2 $3 $4 $KafkaParamsconfFile

The problem is that I am running in yarn-client mode (all the links you mentioned talk about yarn-cluster mode), but I have to run in yarn-client mode due to some project constraints. The issue is: if I specify the full path of the keystore file (where it is located on the edge node from which I run the command) in the 'ssl.truststore.location' parameter, then the executors cannot find the file in their cache, because they look up the complete path + file name while the executor cache contains a file named only 'kafka_client_truststore.jks'.

And when I pass the keystore file name (kafka_client_truststore.jks) without a path to 'ssl.truststore.location', it fails on the driver, because the driver looks in the current directory on the edge node from which the job is run. (If I run the job from the directory /conf/keystore, where this keystore file is present on the edge node, the job succeeds.)

Is there a way to solve this, in your view, or a better way to load the same set of files for the driver and the executors when running in yarn-client mode?

Regards,
Hitesh
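For illustration, the behaviour described above suggests one workaround: stage the truststore into the directory spark-submit is launched from, so that the bare file name in 'ssl.truststore.location' resolves on the driver (current working directory) the same way it does on the executors (YARN container cache). This is only a sketch under assumptions: /conf/keystore is the edge-node path from the command above, and the temp launch directory is a hypothetical stand-in.

```shell
# Sketch: make the bare name in ssl.truststore.location resolve on the driver
# (its current working directory) as well as on the executors (YARN container
# cache). /conf/keystore is the edge-node path from the spark-submit command
# above; the temp launch directory here is a hypothetical stand-in.
SUBMIT_DIR=$(mktemp -d)
cp /conf/keystore/kafka_client_truststore.jks "$SUBMIT_DIR"/ 2>/dev/null \
  || touch "$SUBMIT_DIR/kafka_client_truststore.jks"  # placeholder if the path differs
cd "$SUBMIT_DIR"
# spark-submit would now be launched from here, with the Kafka parameter set as:
#   ssl.truststore.location=kafka_client_truststore.jks
ls kafka_client_truststore.jks
```

The point of the sketch is only that driver and executors then agree on a relative file name; whether this fits your deployment constraints is a separate question.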
10-19-2017
07:25 AM
Also, is there a way to confirm that the CSD file is properly deployed? I also don't see Scala 2.11 libraries under /opt/cloudera/parcels/CDH/jars, only Scala 2.10 libraries. I heard that both Scala 2.10 and 2.11 are installed with CDH 5.7 and later. Shouldn't Scala 2.11 be available, and could this also be the cause of the Spark2 service not appearing? I did all the steps as described and they all completed successfully; the Spark2 parcel is activated now. Regards, Hitesh
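For reference, a rough way to check both points from the shell is sketched below. The paths are the Cloudera Manager defaults and may differ on your cluster, and as far as I understand the Spark2 parcel bundles its own Scala 2.11 jars rather than placing them under the CDH parcel's jars directory; treat this as a sketch, not an official procedure.

```shell
# Check the CSD and parcel deployment. Paths are Cloudera Manager defaults;
# adjust them if your installation is customized.
CSD_DIR=/opt/cloudera/csd
PARCEL_DIR=/opt/cloudera/parcels

# The SPARK2 CSD jar should sit in the CSD directory (and cloudera-scm-server
# must be restarted after copying it there):
CSD_CHECK=$(ls "$CSD_DIR" 2>/dev/null | grep -i spark2 \
  || echo "no SPARK2 CSD jar under $CSD_DIR")
echo "$CSD_CHECK"

# The Spark2 parcel ships its own Scala 2.11 jars, separate from the Scala 2.10
# jars under the CDH parcel's jars directory:
PARCEL_CHECK=$(ls -d "$PARCEL_DIR"/SPARK2* 2>/dev/null \
  || echo "no SPARK2 parcel under $PARCEL_DIR")
echo "$PARCEL_CHECK"
```

If the CSD jar is missing from that directory, the Spark2 service would not appear in Cloudera Manager even with the parcel activated.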