
How do I pass variables to a Spark job using Envelope

New Contributor

Hello!

 

In my Envelope pipeline, I need to compare two Hive tables. Instead of hardcoding the tables in the .conf file, I would like to pass in which tables to compare. I tried using spark.yarn.appMasterEnv.varName but it doesn't seem to work. I'm running CDH 5.13.3 with Java 1.8 on a CentOS VM.

 

This is what the script that runs the Spark job looks like:

#!/bin/bash

sudo -u hdfs spark2-submit \
--master yarn \
--deploy-mode client \
--conf spark.yarn.appMasterEnv.tableA=dbA.tableA \
--conf spark.yarn.appMasterEnv.tableB=dbB.tableB \
envelope-0.7.2.jar comparison.conf

 

Part of my .conf file:

application { name = comparison }

steps {
  tableA {
    type = hive
    table = ${tableA}
  }
  tableB {
    type = hive
    table = ${tableB}
  }
}

 

 


4 REPLIES

Expert Contributor

You just need to use local environment variables since you are running in client mode.

 

For example,

export tableA=dbA.tableA
export tableB=dbB.tableB

spark2-submit \
--master yarn \
--deploy-mode client \
envelope-0.7.2.jar comparison.conf 
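For what it's worth, the reason this works without touching the conf file is that Envelope configs are HOCON, and a substitution such as ${tableA} that isn't defined anywhere in the file falls back to an environment variable of the same name when the config is resolved. With the exports above, the two steps load as if you had written:

tableA { type = hive, table = dbA.tableA }
tableB { type = hive, table = dbB.tableB }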

 

For sudo you would need to use -E to pass the variables through, but it is not good practice to run jobs as the HDFS superuser instead of your own user.
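As a rough sketch, the sudo variant could look like this (assuming your sudoers policy doesn't scrub the environment; -E asks sudo to preserve it):

export tableA=dbA.tableA
export tableB=dbB.tableB

sudo -E -u hdfs spark2-submit \
--master yarn \
--deploy-mode client \
envelope-0.7.2.jar comparison.conf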

New Contributor

Thank you for your answer!

I need to use sudo -u hdfs because the result of comparing the two tables is stored in a third table in HDFS, and I need write permission for that. Also, if I pass those variables using export, do I need to declare them inside the .conf file as well as in run.sh? And does this work inside the SQL? For example, one of my variables is a primaryKey field. I'm comparing A.${primaryKey} = B.${primaryKey}, but the comparison doesn't return any results; it just raises an error in the SQL: "A. = B."

Expert Contributor

You don't need to declare them in the conf file, but you can't put environment variables inside the SQL string itself because of the way HOCON handles variable substitution. Concatenation with variables is reasonably easy though, for example:

"SELECT * FROM tableA A INNER JOIN tableB B ON A."${primaryKey}" = B."${primaryKey}

 

New Contributor

Thank you so much, this helped a lot!