
Spark - Hive tables not found when running in YARN-Cluster mode

Expert Contributor

I have a Spark 1.4.1 application on HDP 2.3.2. It works fine in YARN-Client mode, but in YARN-Cluster mode none of my Hive tables can be found by the application.

I submit the application like so:

  ./bin/spark-submit \
  --class com.myCompany.Main \
  --master yarn-cluster \
  --num-executors 3 \
  --driver-memory 4g \
  --executor-memory 10g \
  --executor-cores 1 \
  --jars lib/datanucleus-api-jdo-3.2.6.jar,lib/datanucleus-rdbms-3.2.9.jar,lib/datanucleus-core-3.2.10.jar \
  /home/spark/apps/YarnClusterTest.jar \
  --files /etc/hive/conf/hive-site.xml

Here's an excerpt from the logs:

15/12/02 11:05:13 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
15/12/02 11:05:14 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/12/02 11:05:14 INFO metastore.ObjectStore: ObjectStore, initialize called
15/12/02 11:05:14 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/12/02 11:05:14 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/12/02 11:05:14 INFO storage.BlockManagerMasterEndpoint: Registering block manager worker2.xxx.com:34697 with 5.2 GB RAM, BlockManagerId(1, worker2.xxx.com, 34697)
15/12/02 11:05:16 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/12/02 11:05:16 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5.  Encountered: "@" (64), after : "".
15/12/02 11:05:17 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/12/02 11:05:17 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/12/02 11:05:18 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/12/02 11:05:18 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/12/02 11:05:18 INFO metastore.ObjectStore: Initialized ObjectStore
15/12/02 11:05:19 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
15/12/02 11:05:19 INFO metastore.HiveMetaStore: Added admin role in metastore
15/12/02 11:05:19 INFO metastore.HiveMetaStore: Added public role in metastore
15/12/02 11:05:19 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
15/12/02 11:05:19 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
15/12/02 11:05:19 INFO parse.ParseDriver: Parsing command: SELECT * FROM streamsummary
15/12/02 11:05:20 INFO parse.ParseDriver: Parse Completed
15/12/02 11:05:20 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 0.13.1 using Spark classes.
15/12/02 11:05:20 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=streamsummary
15/12/02 11:05:20 INFO HiveMetaStore.audit: ugi=spark  ip=unknown-ip-addr  cmd=get_table : db=default tbl=streamsummary   
15/12/02 11:05:20 DEBUG myCompany.Main$: no such table streamsummary; line 1 pos 14

I basically run into the same 'no such table' problem any time my application needs to read from or write to the Hive tables.

Thanks in advance!

UPDATE:

I tried submitting the Spark application with the --files parameter placed before --jars, as per @Guilherme Braccialli's suggestion, but doing so now gives me an exception saying that the HiveMetaStoreClient could not be instantiated.

spark-submit:

  ./bin/spark-submit \
  --class com.myCompany.Main \
  --master yarn-cluster \
  --num-executors 3 \
  --driver-memory 1g \
  --executor-memory 11g \
  --executor-cores 1 \
  --files /etc/hive/conf/hive-site.xml \
  --jars lib/datanucleus-api-jdo-3.2.6.jar,lib/datanucleus-rdbms-3.2.9.jar,lib/datanucleus-core-3.2.10.jar \
  /home/spark/apps/YarnClusterTest.jar

code:

// core.scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

/**
 * This trait should be mixed in by every other class or trait that
 * depends on `sc`.
 */
trait Core extends java.io.Serializable {
  val sc: SparkContext
  lazy val sqlContext = new HiveContext(sc)
}

// yarncore.scala
import org.apache.spark.{SparkConf, SparkContext}

/**
 * This trait initializes the SparkContext with YARN as the master.
 */
trait YarnCore extends Core {
  val conf = new SparkConf().setAppName("my app").setMaster("yarn-cluster")
  val sc = new SparkContext(conf)
}

// main.scala
import org.apache.log4j.Logger

object Test {
  def main(args: Array[String]): Unit = {

    // Initialize the Spark application. YarnCore initializes the SparkContext
    // in YARN mode, sqlHelper provides SQL functionality, and Transformer
    // provides UDFs for transforming the dataframes into the marts.
    val app = new YarnCore with sqlHelper with Transformer

    // Initialize the logger.
    val log = Logger.getLogger(getClass.getName)

    val count = app.sqlContext.sql("SELECT COUNT(*) FROM streamsummary").collect().head.getLong(0)

    log.info(s"streamsummary has $count records")

    // Shut down the Spark app.
    app.sc.stop()
  }
}

exception:

15/12/11 09:34:55 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
15/12/11 09:34:56 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
	at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:117)
	at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:165)
	at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:163)
	at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:170)
	at com.epldt.core.Core$class.sqlContext(core.scala:13)
	at com.epldt.Test$anon$1.sqlContext$lzycompute(main.scala:17)
	at com.epldt.Test$anon$1.sqlContext(main.scala:17)
	at com.epldt.Test$.main(main.scala:26)
	at com.epldt.Test.main(main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.spark.deploy.yarn.ApplicationMaster$anon$2.run(ApplicationMaster.scala:486)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
	... 14 more
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
	... 19 more
Caused by: java.lang.NumberFormatException: For input string: "1800s"
	at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
	at java.lang.Integer.parseInt(Integer.java:580)
	at java.lang.Integer.parseInt(Integer.java:615)
	at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1258)
	at org.apache.hadoop.hive.conf.HiveConf.getIntVar(HiveConf.java:1211)
	at org.apache.hadoop.hive.conf.HiveConf.getIntVar(HiveConf.java:1220)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:293)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:214)
	... 24 more

1 ACCEPTED SOLUTION


@Luis Antonio Torres

I did a few tests and I think you just need to change the location of --files: it must come before your .jar file.

Find my sample class here:

https://github.com/gbraccialli/SparkUtils/blob/master/src/main/scala/com/github/gbraccialli/spark/Hi...

Project is here:

https://github.com/gbraccialli/SparkUtils

Sample spark-submit with Hive commands as parameters:

git clone https://github.com/gbraccialli/SparkUtils
cd SparkUtils/
mvn clean package
spark-submit \
  --class com.github.gbraccialli.spark.HiveCommand \
  --master yarn-cluster \
  --num-executors 1 \
  --driver-memory 1g \
  --executor-memory 1g \
  --executor-cores 1 \
  --files /usr/hdp/current/spark-client/conf/hive-site.xml \
  --jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar \
 target/SparkUtils-1.0.0-SNAPSHOT.jar "show tables" "select * from sample_08"


16 REPLIES


@Luis Antonio Torres I'm glad it worked! It took me a while to make it work too.

You can also check this Scala code I created, where Hive commands are taken from the command line instead of being hard-coded:

https://github.com/gbraccialli/SparkUtils/blob/master/src/main/scala/com/github/gbraccialli/spark/Hi...
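
The pattern is roughly the following (a sketch of the idea only; the exact linked implementation may differ):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Sketch: execute each SQL statement passed as a program argument,
// e.g. spark-submit ... HiveCommand.jar "show tables" "select * from sample_08"
object HiveCommand {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HiveCommand"))
    val sqlContext = new HiveContext(sc)
    args.foreach(cmd => sqlContext.sql(cmd).show())
    sc.stop()
  }
}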

Expert Contributor

@Guilherme Braccialli

I did look at your code, and it's an interesting approach. However, in my actual application the SQL commands don't need to be parameterized, since the app performs ETL on a specific set of data. Still, I'll keep it in mind. I have been trying to come up with a utility mixin that provides wrapper methods around the calls to hiveContext.sql, so that all my other Spark apps only need to provide the column and table names, as well as the WHERE conditions — something like the sketch below.
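
For illustration, such a mixin might look something like this (the trait and method names here are hypothetical, not taken from the actual application):

import org.apache.spark.sql.DataFrame

// Hypothetical mixin: wraps sqlContext.sql so callers only pass column
// names, a table name, and an optional WHERE condition.
trait SqlWrapper extends Core {
  def select(columns: Seq[String], table: String, where: Option[String] = None): DataFrame = {
    val whereClause = where.map(cond => s" WHERE $cond").getOrElse("")
    sqlContext.sql(s"SELECT ${columns.mkString(", ")} FROM $table$whereClause")
  }
}

A caller would then write, e.g., select(Seq("*"), "streamsummary") instead of assembling the SQL string itself.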

Expert Contributor

@Guilherme Braccialli

One thing about your code that got me curious, though, is how you instantiated your SparkContext... I've been following the Spark programming guide, which is why I set the app name and master on the SparkConf before initializing `sc`... In your code you don't set the master, so in this case does `SparkContext` pick up the master from the `--master` arg of the submit command?


Yes, it picks it up automatically. You can use the same code to run in yarn-cluster, yarn-client, or standalone mode.

If you want, you can define the app name as well:

import org.apache.spark.{SparkConf, SparkContext}

// Master is not set here; spark-submit's --master flag supplies it.
val sparkConf = new SparkConf().setAppName("app-name")
val sc = new SparkContext(sparkConf)
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

I copied my code and pom.xml from @Randy Gelhausen's:

https://github.com/randerzander/HiveToPhoenix/blob/master/src/main/scala/com/github/randerzander/Hiv...

Expert Contributor

I see! They probably could have phrased the documentation better, IMHO.

you will not want to hardcode master in the program, but rather launch the application with spark-submit and receive it there

The above quote from the documentation was never actually clear to me; I thought I had to "receive" the master URL by reading it in the code from some configuration or parameter and then setting the master myself. Thanks for clearing that up!
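
For what it's worth, one quick way to confirm that spark-submit supplied the master is to read it back from the running context (a throwaway snippet, not code from this thread):

import org.apache.spark.{SparkConf, SparkContext}

// No setMaster() here: spark-submit's --master flag fills in spark.master.
val sc = new SparkContext(new SparkConf().setAppName("who-sets-my-master"))
println(s"master = ${sc.master}")  // prints whatever --master was passed
sc.stop()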

Expert Contributor

@Ana Gillan we're using 2.3.2, and Kerberos is disabled.

@Jonas Straub I've updated the post with some of the simpler sample code I've used to try to test things out. Even a simple SELECT statement gives me the same errors... though I can't be sure whether my use of the cake pattern is causing unwanted side effects.


@Luis Antonio Torres

Please do not use the hive-site.xml from Hive's own configuration directory. Spark 1.4's bundled Hive 0.13 classes likely cannot parse the time-suffixed values (such as "1800s") present in the full Hive config, which is what triggers the NumberFormatException above.

You need a clean hive-site.xml for Spark; the one you pass to Spark should contain only this:

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://sandbox.hortonworks.com:9083</value>
  </property>
</configuration>