Spark 1.6.3 bucketBy error
- Labels:
  - Apache Spark
Created 10-10-2017 01:13 PM
Hi Team,
I am trying to write a DataFrame to a Hive table, bucketed by one of the columns, and I am getting the following error:
File "<stdin>", line 1, in <module>
AttributeError: 'DataFrameWriter' object has no attribute 'bucketBy'
Here is the statement I am trying to run:
rs.write.bucketBy(4,"Column1").sortBy("column2").saveAsTable("database.table")
Can you please help me out with this?
Created 10-10-2017 05:47 PM
Spark 1.6.3 does not support this: bucketBy is not part of the DataFrameWriter API in 1.6.x; it was only added to DataFrameWriter in Spark 2.x.
https://spark.apache.org/docs/1.6.3/sql-programming-guide.html#creating-dataframes
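For reference, here is a minimal sketch of how the same write looks once you are on Spark 2.x (the Python bucketBy/sortBy methods landed in the 2.x line). It assumes a SparkSession with Hive support and a DataFrame rs with columns "Column1" and "column2" as in your statement; the source table name is hypothetical, so substitute your own DataFrame.

from pyspark.sql import SparkSession

# Sketch only: requires Spark 2.x with Hive support enabled.
spark = (SparkSession.builder
         .appName("bucketBy-example")
         .enableHiveSupport()                 # so saveAsTable creates a Hive-managed table
         .getOrCreate())

rs = spark.table("database.source_table")    # hypothetical source; use your own DataFrame

# On Spark 2.x the chained call from the question is valid:
(rs.write
   .bucketBy(4, "Column1")                   # 4 buckets, hashed on Column1
   .sortBy("column2")                        # sort rows within each bucket
   .saveAsTable("database.table"))

# On Spark 1.6.3, DataFrameWriter has no bucketBy/sortBy; the closest option
# is an unbucketed save:
# rs.write.saveAsTable("database.table")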
