@Maher Hattabi
You should be able to directly read in multiple files as part of the sqlContext.read statement, as shown below:
import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/tmp/test_1.csv", "/tmp/test_2.csv", "/tmp/test_3.csv")
df.show()
If you are using Spark 2.0 or newer, CSV support is built into Spark itself, so the preferred syntax uses the SparkSession (the spark object, not the spark context) and the short "csv" format name:
val df = spark.read
  .format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/tmp/test_1.csv", "/tmp/test_2.csv", "/tmp/test_3.csv")
df.show()
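If the files sit in one directory and follow a common naming pattern, you can also pass a glob instead of listing every path. A minimal sketch, assuming Spark 2.0+, that all matching files share the same columns, and that /tmp/test_*.csv matches only the files you want:

// Spark expands the glob and reads every matching file in one call.
// With header=true, the header row of each file is handled automatically.
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/tmp/test_*.csv")
df.show()

This is convenient when the number of files varies between runs; if you need to know which file each row came from, the built-in input_file_name() function can add that as a column.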
Please let me know if this helps.