07-04-2018 08:46 AM
I have the exact same problem, on Spark 2.2.0.cloudera1. Can you please tell me what's wrong with this code? It is the simplest Java version of the example code from Spark's Javadoc (here).

The code:

    List<Row> rows = new ArrayList<>();
    Object[] cols = new Object[2];
    cols[0] = "one";
    cols[1] = 1;
    rows.add(RowFactory.create(cols));
    // st is the StructType schema for the two columns (string, integer), defined elsewhere
    spark.createDataFrame(rows, st).write().format("parquet").mode(SaveMode.Overwrite).saveAsTable("my_scheme.my_table");

    rows = new ArrayList<>();
    cols[0] = "two";
    cols[1] = 2;
    rows.add(RowFactory.create(cols));
    spark.createDataFrame(rows, st).write().format("parquet").mode(SaveMode.Append).saveAsTable("my_scheme.my_table");

The error:

    java.lang.IllegalArgumentException: Expected exactly one path to be specified, but got:
        at org.apache.spark.sql.execution.datasources.DataSource.writeInFileFormat(DataSource.scala:410)
        at org.apache.spark.sql.execution.datasources.DataSource.writeAndRead(DataSource.scala:454)
        at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.saveDataIntoTable(createDataSourceTables.scala:198)
        at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:148)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:610)
        at org.apache.spark.sql.DataFrameWriter.createTable(DataFrameWriter.scala:420)
        at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:399)
        at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:354)
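For completeness, here is a minimal self-contained sketch of what I am running. It assumes spark is a SparkSession built with Hive support and fills in a placeholder definition for st (one string column, one integer column); the class name and column names are only illustrative, the table name is the same as above.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;
    import org.apache.spark.sql.SaveMode;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructType;

    public class SaveAsTableRepro {
        public static void main(String[] args) {
            // SparkSession with Hive support so saveAsTable goes through the metastore
            SparkSession spark = SparkSession.builder()
                    .appName("saveAsTable repro")
                    .enableHiveSupport()
                    .getOrCreate();

            // Placeholder schema: one string column and one integer column
            StructType st = DataTypes.createStructType(Arrays.asList(
                    DataTypes.createStructField("name", DataTypes.StringType, false),
                    DataTypes.createStructField("value", DataTypes.IntegerType, false)));

            // First write: create/overwrite the Parquet table
            List<Row> rows = new ArrayList<>();
            rows.add(RowFactory.create("one", 1));
            spark.createDataFrame(rows, st)
                    .write().format("parquet").mode(SaveMode.Overwrite)
                    .saveAsTable("my_scheme.my_table");

            // Second write: append a new batch to the same table
            rows = new ArrayList<>();
            rows.add(RowFactory.create("two", 2));
            spark.createDataFrame(rows, st)
                    .write().format("parquet").mode(SaveMode.Append)
                    .saveAsTable("my_scheme.my_table");

            spark.stop();
        }
    }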