
java.lang.NoSuchMethodError with Spark Thrift and LLAP

Contributor

Hello

I’m trying to get LLAP to work with the Spark2 Thrift server on HDP-2.6. I have tried to follow the guide at https://community.hortonworks.com/content/kbentry/72454/apache-spark-fine-grain-security-with-llap-t... but have run into a number of problems.

According to that guide, I should download a spark-llap assembly jar from repo.hortonworks.com. The guide was written for HDP-2.5.3, and the jar is there for that version of HDP, but for some strange reason it isn’t there for HDP-2.6. So I downloaded spark-llap_2.11-1.0.2-2.1-assembly.jar instead, and the Thrift server starts up with LLAP support. Using beeline, I can connect to the Thrift server, and everything looks fine until I try to run a query. As soon as I do, I get the java.lang.NoSuchMethodError shown below (my configuration is sketched after the stack trace). Does anybody know a solution for this?

17/06/09 12:34:51 ERROR SparkExecuteStatementOperation: Error executing query, currentState RUNNING,
java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.catalog.CatalogTable.copy(Lorg/apache/spark/sql/catalyst/TableIdentifier;Lorg/apache/spark/sql/catalyst/catalog/CatalogTableType;Lorg/apache/spark/sql/catalyst/catalog/CatalogStorageFormat;Lorg/apache/spark/sql/types/StructType;Lscala/Option;Lscala/collection/Seq;Lscala/Option;Ljava/lang/String;JJLscala/collection/immutable/Map;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/collection/Seq;Z)Lorg/apache/spark/sql/catalyst/catalog/CatalogTable;
    at org.apache.spark.sql.hive.llap.LlapExternalCatalog$anonfun$getTable$1.apply(LlapExternalCatalog.scala:160)
    at org.apache.spark.sql.hive.llap.LlapExternalCatalog$anonfun$getTable$1.apply(LlapExternalCatalog.scala:158)
    at org.apache.spark.sql.hive.llap.LlapExternalCatalog.withClient(LlapExternalCatalog.scala:78)
    at org.apache.spark.sql.hive.llap.LlapExternalCatalog.getTable(LlapExternalCatalog.scala:158)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableMetadata(SessionCatalog.scala:290)
    at org.apache.spark.sql.hive.llap.LlapSessionCatalog.getTableMetadata(LlapSessionCatalog.scala:90)
    at org.apache.spark.sql.execution.command.DescribeTableCommand.run(tables.scala:437)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
    at org.apache.spark.sql.execution.SparkPlan$anonfun$execute$1.apply(SparkPlan.scala:114)
    at org.apache.spark.sql.execution.SparkPlan$anonfun$execute$1.apply(SparkPlan.scala:114)
    at org.apache.spark.sql.execution.SparkPlan$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$execute(SparkExecuteStatementOperation.scala:231)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$anon$1$anon$2.run(SparkExecuteStatementOperation.scala:174)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$anon$1$anon$2.run(SparkExecuteStatementOperation.scala:171)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$anon$1.run(SparkExecuteStatementOperation.scala:184)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
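For reference, here is roughly how I set things up, following the HDP-2.5.3 guide. Treat this as a sketch: the host names, ZooKeeper quorum, ports, and paths below are placeholders for my environment, and the property names are the ones the guide uses for the spark-llap connector.

    # Custom properties added to spark2-thrift-sparkconf (all values are placeholders)
    spark.sql.hive.llap true
    spark.hadoop.hive.llap.daemon.service.hosts @llap0
    spark.hadoop.hive.zookeeper.quorum zk1.example.com:2181,zk2.example.com:2181
    spark.sql.hive.hiveserver2.jdbc.url jdbc:hive2://hs2-interactive.example.com:10500
    spark.jars /usr/hdp/current/spark2-client/jars/spark-llap_2.11-1.0.2-2.1-assembly.jar

With that in place the Thrift server starts, and I connect with beeline like this (host and port are again placeholders for my Spark2 Thrift server):

    beeline -u "jdbc:hive2://sparkthrift.example.com:10016/" -n myuser
    # Even a simple DESCRIBE is enough to trigger the error, which matches the
    # DescribeTableCommand frame in the stack trace above:
    0: jdbc:hive2://sparkthrift.example.com:10016/> describe mydb.mytable;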

1 REPLY

Master Collaborator

The most up-to-date document to follow for configuring this is:

https://community.hortonworks.com/articles/101181/rowcolumn-level-security-in-sql-for-apache-spark-2...
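
That NoSuchMethodError is the classic symptom of a spark-llap assembly built against a different Spark release than the one your Thrift server is running: the assembly calls a CatalogTable.copy signature that doesn’t exist in your Spark’s catalyst jar. As a quick sanity check (a sketch only: the catalyst jar version below is an example, use whatever actually sits in your spark2-client/jars directory), you can compare the signature your Spark ships with against the descriptor in the stack trace:

    cd /usr/hdp/current/spark2-client/jars
    # Print the copy(...) signatures of CatalogTable in the Spark you actually run.
    javap -cp spark-catalyst_2.11-2.1.0.2.6.0.3-8.jar \
        org.apache.spark.sql.catalyst.catalog.CatalogTable | grep ' copy('
    # If the parameter list differs from the one in the NoSuchMethodError, the
    # assembly jar and your Spark version don't match, and you need the assembly
    # built for your exact HDP/Spark release.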
