
spark.sql.sources.partitionOverwriteMode=dynamic not working in CDP 7.1.4

Rising Star

Hi,

 

We are using spark.sql.sources.partitionOverwriteMode=dynamic in our PySpark scripts on our CDH 6.3.2 cluster with Spark 2.4.0, but when we try it on CDP 7.1.4 with Spark 2.4.0 it does not work. Is there any way to make spark.sql.sources.partitionOverwriteMode=dynamic work in CDP? Are there any alternatives?

 

Just to highlight that both our CDP 7.1.4 and CDH 6.3.2 clusters run the same Spark version, 2.4.0.
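
For illustration, this is the kind of PySpark write that depends on the setting (table name below is hypothetical):

# Overwrite only the partitions present in df, leaving the rest of the
# target table untouched (hypothetical table name).
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
df.write.mode("overwrite").insertInto("mydb.events")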

1 ACCEPTED SOLUTION

Super Collaborator

Hi Team,

 

CDP uses the "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol" commit protocol by default, which does not support dynamicPartitionOverwrite.

 

You can set the following parameters in your Spark job to switch back to the default commit protocol.

 

code level:

 

spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
spark.conf.set("spark.sql.parquet.output.committer.class", "org.apache.parquet.hadoop.ParquetOutputCommitter")
spark.conf.set("spark.sql.sources.commitProtocolClass", "org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol")

spark-submit/spark-shell:

 

--conf spark.sql.sources.partitionOverwriteMode=dynamic
--conf spark.sql.parquet.output.committer.class=org.apache.parquet.hadoop.ParquetOutputCommitter
--conf spark.sql.sources.commitProtocolClass=org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol 
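
Put together, a full submission might look like this (the script name is just a placeholder):

spark-submit \
  --conf spark.sql.sources.partitionOverwriteMode=dynamic \
  --conf spark.sql.parquet.output.committer.class=org.apache.parquet.hadoop.ParquetOutputCommitter \
  --conf spark.sql.sources.commitProtocolClass=org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol \
  your_job.py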

Note: if you are using S3, you can instead disable the optimized S3 committers with the spark.cloudera.s3_committers.enabled parameter.

 

--conf spark.cloudera.s3_committers.enabled=false

 


4 REPLIES

New Contributor

I'm having this same issue whether I specify this config in spark-defaults.conf via Cloudera Manager for CDP 7.1.4 or inline via spark.write.option("partitionOverwriteMode", "dynamic").

The error message is:

java.io.IOException: PathOutputCommitProtocol does not support dynamicPartitionOverwrite
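
Wherever the mode is set, the write fails as long as PathOutputCommitProtocol is the active commit protocol; switching the protocol class, as in the accepted answer, is what avoids the exception. A minimal sketch, with hypothetical paths and column names:

# Fails on stock CDP: dynamic overwrite requested while the cloud committer is active.
df.write.option("partitionOverwriteMode", "dynamic") \
    .mode("overwrite").partitionBy("dt").parquet("/data/events")

# Works once the default Hadoop commit protocol is restored:
spark.conf.set("spark.sql.sources.commitProtocolClass",
               "org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol")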

New Contributor

I was getting the same error after the Cloudera upgrade while using insert overwrite with spark.sql.sources.partitionOverwriteMode=dynamic. For me, the config property below resolved the issue.

"spark.sql.sources.commitProtocolClass=org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol"

After the upgrade, the default value was spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol, and that was causing the issue.
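
You can check which protocol class a session is actually using before changing anything; on an upgraded CDP cluster this should print the PathOutputCommitProtocol class:

print(spark.conf.get("spark.sql.sources.commitProtocolClass"))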

New Contributor

I was able to fix this in our CDP 7.1.4 cluster today by disabling Enable Optimized S3 Committers (spark.cloudera.s3_committers.enabled) in the Spark service configuration.

This works for me because we are using HDFS on premises. If you are using S3, I'm guessing this committer is in place because of S3's eventual-consistency issues.

I then also added the spark.sql.sources.partitionOverwriteMode=dynamic setting to spark-defaults.conf, also in the Spark service configuration, via the Safety Valve settings.
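
For reference, a sketch of the safety-valve lines (the first is what this reply describes; the second should be the spark-defaults equivalent of unticking the checkbox above):

spark.sql.sources.partitionOverwriteMode dynamic
spark.cloudera.s3_committers.enabled false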

Explorer

It also works for me on CDP 7.1.7.

Thank you 
