
java.lang.IllegalArgumentException: Wrong FS: hdfs://******:8020/user/hive/warehouse/employee_test

Expert Contributor

Hi,

I am hitting an HDFS error while creating a table from the spark2-shell:

desind@fsad145:~#> spark2-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/11/09 14:22:43 WARN spark.SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
Spark context Web UI available at http://******:4040
Spark context available as 'sc' (master = yarn, app id = application_1510255229586_0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0.cloudera2
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sql("CREATE TABLE IF NOT EXISTS default.employee_test(id INT, name STRING, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'")
java.lang.IllegalArgumentException: Wrong FS: hdfs://******:8020/user/hive/warehouse/employee_test, expected: hdfs://nameservice1
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:662)
at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:482)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply$mcV$sp(HiveExternalCatalog.scala:231)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:200)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:200)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:98)
at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:200)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:248)
at org.apache.spark.sql.execution.command.CreateTableCommand.run(tables.scala:116)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:600)
... 48 elided
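
The "Wrong FS ... expected: hdfs://nameservice1" message means the filesystem Spark resolves from the cluster configuration (the HA nameservice) does not match the location the Hive metastore has stored for the database. Both sides can be checked from the same spark2-shell session; a minimal sketch, using the sc and spark objects the shell already provides:

scala> sc.hadoopConfiguration.get("fs.defaultFS")   // what Spark/Hadoop treat as the default filesystem
res0: String = hdfs://nameservice1

scala> sql("DESCRIBE DATABASE default").show(truncate = false)   // what the metastore has stored for the database

On a cluster showing this error, the first call returns the nameservice URI while the database location printed by the second still carries the old hdfs://<host>:8020 prefix.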

1 REPLY

Expert Contributor

This issue was resolved by updating the database location stored in the metastore table HIVE.DBS so that it points to nameservice1.

Our Hive metastore runs on Oracle Database; the equivalent update there is:

SQL> update HIVE.DBS set DB_LOCATION_URI = 'hdfs://nameservice1/user/hive/warehouse' where NAME = 'default';

1 row updated.

SQL> select DB_LOCATION_URI from HIVE.DBS where NAME = 'default';

DB_LOCATION_URI
----------------------------------------
hdfs://nameservice1/user/hive/warehouse

SQL> commit;

Commit complete.
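
Once the update is committed (and, as an assumption about metastore caching, after restarting the Hive Metastore and opening a fresh spark2-shell so no stale location lingers), the new location can be verified and the original statement retried; a quick sketch:

scala> sql("DESCRIBE DATABASE default").show(truncate = false)   // the location should now read hdfs://nameservice1/user/hive/warehouse

scala> sql("CREATE TABLE IF NOT EXISTS default.employee_test(id INT, name STRING, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'")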

 

Using Cloudera Manager to make this update did not work in 5.13.0.
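
For reference, Hive also ships a metatool intended for exactly this kind of NameNode HA migration; it rewrites the stored filesystem roots without hand-editing HIVE.DBS. A sketch, run on the metastore host (the old NameNode host below is a placeholder):

hive --service metatool -listFSRoot
hive --service metatool -updateLocation hdfs://nameservice1 hdfs://<old-namenode-host>:8020

Both are documented metatool operations, though I have not verified them against an Oracle-backed metastore on 5.13.0.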