remote pyspark shell and spark-submit error java.lang.NoSuchFieldError: METASTORE_CLIENT_SOCKET_LIFETIME

New Contributor

Hi all,

We are executing pyspark and spark-submit against a kerberized CDH 5.15 cluster from a remote Airflow Docker container that is not managed by Cloudera Manager, i.e. the Airflow container is not part of the CDH environment. The Hive, Spark, and Java versions in the container are the same as on CDH. A valid Kerberos ticket is obtained before executing spark-submit or pyspark.

Python script:

 

# create a Hive-enabled SparkSession and read a table from the metastore
from pyspark.sql import SparkSession, functions as F
spark = SparkSession.builder.enableHiveSupport().appName('appName').getOrCreate()
sa_df = spark.sql("SELECT * FROM lnz_ch.lnz_cfg_codebook")

 

Error is:

 

To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.3.0
      /_/

Using Python version 3.6.12 (default, Oct 13 2020 21:45:01)
SparkSession available as 'spark'.
>>> from pyspark.sql import SparkSession, functions as F
>>> spark = SparkSession.builder.enableHiveSupport().appName('appName').getOrCreate()
>>> sa_df=spark.sql("SELECT * FROM lnz_ch.lnz_cfg_codebook")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/var/lib/airflow/spark/spark-2.3.0-bin-without-hadoop/python/pyspark/sql/session.py", line 708, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "/var/lib/airflow/spark/spark-2.3.0-bin-without-hadoop/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
  File "/var/lib/airflow/spark/spark-2.3.0-bin-without-hadoop/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/var/lib/airflow/spark/spark-2.3.0-bin-without-hadoop/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o24.sql.
: java.lang.NoSuchFieldError: METASTORE_CLIENT_SOCKET_LIFETIME
        at org.apache.spark.sql.hive.HiveUtils$.formatTimeVarsForHiveClient(HiveUtils.scala:195)
        at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:286)
        at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
        at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:195)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
        at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
        at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
        at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
        at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
        at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
        at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
        at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
        at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anon$1.<init>(HiveSessionStateBuilder.scala:69)
        at org.apache.spark.sql.hive.HiveSessionStateBuilder.analyzer(HiveSessionStateBuilder.scala:69)
        at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
        at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
        at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
        at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:214)
        at java.lang.Thread.run(Thread.java:748)

 

The same error is returned from YARN when executing spark-submit.

 

Details:

After googling this error, we assume that the Spark and Hive versions in the Airflow container are somehow mismatched. The error still occurs if we invoke spark-shell (or spark-submit/pyspark) like this:

 

spark-shell \
--jars \
/var/lib/airflow/spark/apache-hive-1.1.0-bin/lib/hive-metastore-1.1.0.jar,\
/var/lib/airflow/spark/apache-hive-1.1.0-bin/lib/hive-exec-1.1.0.jar,\
/var/lib/airflow/spark/apache-hive-1.1.0-bin/lib/hive-common-1.1.0.jar,\
/var/lib/airflow/spark/apache-hive-1.1.0-bin/lib/hive-serde-1.1.0.jar,\
/var/lib/airflow/spark/apache-hive-1.1.0-bin/lib/guava-14.0.1.jar,\
/var/lib/airflow/spark/HiveJDBC4.jar \
--conf spark.sql.hive.metastore.version=1.1.0 \
--conf spark.sql.hive.metastore.jars=/var/lib/airflow/spark/spark-2.3.0-bin-without-hadoop/jars/*

 

As you can see, we are experimenting heavily with the --jars argument :D.
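For completeness, the same metastore settings can also be passed from the Python side. The snippet below is only a rough sketch of one variant we are poking at, with spark.sql.hive.metastore.jars pointed at the Hive 1.1.0 lib directory instead of Spark's own jars; the Hadoop jars path in it is an assumption, not a verified location inside our container.

# Sketch only: same metastore settings supplied via SparkSession.builder instead of the CLI.
from pyspark.sql import SparkSession

hive_lib = "/var/lib/airflow/spark/apache-hive-1.1.0-bin/lib/*"
hadoop_lib = "/var/lib/airflow/hadoop/share/hadoop/common/*"  # assumed path, not verified

spark = (
    SparkSession.builder
    .appName("metastoreTest")
    .config("spark.sql.hive.metastore.version", "1.1.0")
    # standard JVM classpath format: colon-separated, wildcards allowed
    .config("spark.sql.hive.metastore.jars", hive_lib + ":" + hadoop_lib)
    .enableHiveSupport()
    .getOrCreate()
)
spark.sql("SELECT * FROM lnz_ch.lnz_cfg_codebook").show(5)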

Any ideas?

 

Thank you.

 

Links

https://issues.apache.org/jira/browse/SPARK-14492

https://jaceklaskowski.gitbooks.io/mastering-spark-sql/content/spark-sql-properties.html

https://jaceklaskowski.gitbooks.io/mastering-spark-sql/content/hive/

5 REPLIES

Master Collaborator

Hi @adrijand

Yeah, it seems there is a jar conflict somewhere. You are loading Hive 1.1.0 classes before the ones bundled with Spark, so Spark can end up referencing a Hive configuration field that did not exist in 1.1.0, as shown below.

: java.lang.NoSuchFieldError: METASTORE_CLIENT_SOCKET_LIFETIME
        at org.apache.spark.sql.hive.HiveUtils$.formatTimeVarsForHiveClient(HiveUtils.scala:195)
        at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:286)
        at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)

In your description you mention CDH 5.15, but the log snippets show the upstream Apache Spark (spark-2.3.0-bin-without-hadoop) and Apache Hive (apache-hive-1.1.0-bin) distributions, which are not the pre-built packages that ship with the CDH stack and are not guaranteed to be compatible with it. Are you deliberately building against different Hive versions in order to connect from the remote Airflow Docker container?
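One quick way to see which jar actually wins on the driver classpath is to ask the JVM where it loaded HiveConf from. A small diagnostic sketch, run in the same pyspark shell once the SparkSession exists:

# Diagnostic sketch: print which jar the driver loaded HiveConf from.
jvm = spark.sparkContext._jvm
hive_conf_cls = jvm.java.lang.Class.forName("org.apache.hadoop.hive.conf.HiveConf")
print(hive_conf_cls.getProtectionDomain().getCodeSource().getLocation().toString())

If that location points at the Hive 1.1.0 jars rather than the jars shipped with Spark, the NoSuchFieldError above is expected.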

New Contributor

Hi @jagadeesan 

thank you for your reply.

 

This Airflow Docker node is not in the Cloudera environment; its Spark, Hive, Hadoop, and Java dependencies are not managed by Cloudera Manager. CDH 5.15 ships with Spark 2.3.0, Hadoop 2.6.0, Hive 1.1.0, and Java 8.

To submit a job to YARN, all versions must match (I know I've read this somewhere; I'm not going crazy yet :D). That's why we downloaded spark-2.3.0-bin-without-hadoop, hive-1.1.0, and so on.
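As a sanity check, here is a rough sketch of how we look at the versions the driver actually sees from the pyspark shell (it assumes the usual VersionInfo and HiveVersionInfo classes are on the driver classpath):

# Rough sanity check of the versions visible to the PySpark driver.
print("Spark :", spark.version)
jvm = spark.sparkContext._jvm
print("Hadoop:", jvm.org.apache.hadoop.util.VersionInfo.getVersion())
print("Hive  :", jvm.org.apache.hive.common.util.HiveVersionInfo.getVersion())
print("Java  :", jvm.java.lang.System.getProperty("java.version"))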

 

We even tried to build the CDH flavour of Spark ourselves:

 

SPARK_HADOOP_VERSION=2.3.0.cloudera4 SPARK_YARN=true sbt assembly

 

But it throws an error:

 

[warn] 	module not found: org.apache.hadoop#hadoop-client;2.6.0-cdh5.13.3
[warn] ==== public: tried
[warn]   https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-client/2.6.0-cdh5.13.3/hadoop-client-2.6.0-cdh5.13.3.pom

 

 

Our next attempt is to add this Airflow node to the CDH environment and assign Spark, Hive, and HDFS gateway roles. Apparently, this type of node does not require an additional license...

Master Collaborator

Hi @adrijand 

Thanks for the detailed explanation. Yes indeed, all versions need to match to avoid ClassNotFoundException (and similar) errors caused by jar conflicts. We encourage you to explore this approach and share your experience.

New Contributor

Hi @jagadeesan

 

here is a short update. After numerous failed attempts to build Spark with

SPARK_HADOOP_VERSION=2.3.0.cloudera4

we added the Airflow node to the CDH environment. That was done by installing the Cloudera agent and registering the node with Cloudera Manager. The next step was to edit the Airflow docker-compose YAML file:

Volume mounts:

...
# java home
- /usr/java/jdk1.8.0_162:/usr/java/jdk1.8.0_162
# krb5.conf
- /etc/krb5.conf:/etc/krb5.conf:ro,z
# CDH bin
- /opt/cloudera/parcels:/opt/cloudera/parcels
# /etc
- /etc/hadoop:/etc/hadoop:rw,z
- /etc/spark2:/etc/spark2:rw,z
- /etc/sqoop:/etc/sqoop:rw,z
# sqoop
- /var/lib/sqoop:/var/lib/sqoop
...

Environment variables:

###java
JAVA_HOME=/usr/java/jdk1.8.0_162

Inside the container the expected symlinks were missing, so we created them:

###java
ln -s /usr/java/jdk1.8.0_162/jre/bin/java /etc/alternatives/java
ln -s /etc/alternatives/java /usr/bin/java
#
ln -s /usr/java/jdk1.8.0_162/bin/java-rmi.cgi /etc/alternatives/java-rmi.cgi
ln -s /etc/alternatives/java-rmi.cgi /usr/bin/java-rmi.cgi
#
ln -s /usr/java/jdk1.8.0_162/bin/javac /etc/alternatives/javac
ln -s /etc/alternatives/javac /usr/bin/javac
#
ln -s /usr/java/jdk1.8.0_162/bin/javaws /etc/alternatives/javaws
ln -s /etc/alternatives/javaws /usr/bin/javaws
#
ln -s /usr/java/jdk1.8.0_162/bin/javapackager /etc/alternatives/javapackager
ln -s /etc/alternatives/javapackager /usr/bin/javapackager
#
ln -s /usr/java/jdk1.8.0_162/bin/javap /etc/alternatives/javap
ln -s /etc/alternatives/javap /usr/bin/javap
#
ln -s /usr/java/jdk1.8.0_162/bin/javah /etc/alternatives/javah
ln -s /etc/alternatives/javah /usr/bin/javah
#
ln -s /usr/java/jdk1.8.0_162/bin/javafxpackager /etc/alternatives/javafxpackager
ln -s /etc/alternatives/javafxpackager /usr/bin/javafxpackager
#
ln -s /usr/java/jdk1.8.0_162/bin/javadoc /etc/alternatives/javadoc
ln -s /etc/alternatives/javadoc /usr/bin/javadoc

###spark2-submit
ln -s /opt/cloudera/parcels/SPARK2-2.3.0.cloudera4-1.cdh5.13.3.p0.611179/bin/spark2-submit /etc/alternatives/spark2-submit
ln -s /etc/alternatives/spark2-submit /usr/bin/spark2-submit
ln -s /etc/spark2/conf.cloudera.spark2_on_yarn /etc/alternatives/spark2-conf

###hdfs
ln -s /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/bin/hdfs /etc/alternatives/hdfs
ln -s /etc/alternatives/hdfs /usr/bin/hdfs
ln -s /etc/hadoop/conf.cloudera.yarn /etc/alternatives/hadoop-conf

###sqoop
ln -s /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/bin/sqoop /etc/alternatives/sqoop
ln -s /etc/alternatives/sqoop /usr/bin/sqoop
ln -s /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/etc/sqoop/conf.dist /etc/alternatives/sqoop-conf

After this, spark2-submit works as expected.
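For anyone hitting the same issue, here is a minimal sketch of the verification script we run (the table name is the one from the original question); we submit it from the Airflow container with spark2-submit:

# verify_hive.py -- minimal check that Hive metastore access works after the change.
# Submitted from the Airflow container with: spark2-submit --master yarn verify_hive.py
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().appName("verify_hive").getOrCreate()
spark.sql("SELECT * FROM lnz_ch.lnz_cfg_codebook").show(5)
spark.stop()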

Master Collaborator

Thanks @adrijand for sharing your updates; it's highly appreciated.