
Loading to S3 Fails - CDH 5.3.0

New Contributor

Since upgrading our cluster from 5.1.2 to 5.3.0, we have been unable to load data to a Hive table that points to S3. It fails with the following error:


Loading data to table schema.table_name partition (dt=null)
Failed with exception Wrong FS: s3n://<s3_bucket>/converted_installs/.hive-staging_hive_2015-01-26_11-05-32_849_2677145287515034575-1/-ext-10000/dt=2015-01-25/000000_0.gz, expected: hdfs://<name_node>:8020
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

The table itself was created using the following DDL (I removed the columns, since they are not very important):


ROW FORMAT SERDE 'com.bizo.hive.serde.csv.CSVSerde'
LOCATION 's3n://<s3_bucket>/data/warehouse_v1/converted_installs';
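For context, a complete version of that DDL might look like the sketch below. Only the ROW FORMAT SERDE and LOCATION lines come from the original statement; the column names are hypothetical placeholders, and the `dt` partition column is inferred from the `dt=2015-01-25` path in the error above.

```sql
-- Hypothetical reconstruction; columns are illustrative placeholders.
CREATE EXTERNAL TABLE schema.table_name (
  install_id STRING,
  payload    STRING
)
PARTITIONED BY (dt STRING)
ROW FORMAT SERDE 'com.bizo.hive.serde.csv.CSVSerde'
LOCATION 's3n://<s3_bucket>/data/warehouse_v1/converted_installs';
```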

We don't have any issues writing to tables that reside locally on HDFS, but for some reason writing to S3 fails. Does anyone have an idea how to fix this?


New Contributor

This was fixed in release 5.4.4. We have installed and successfully retested.

Thank you for following up on this Jim, it's greatly appreciated. I will migrate to 5.4.4 when time allows.



Are you sure this is fixed? We just upgraded to 1.1.0+cdh5.4.4+157-1.cdh5.4.4.p0.6.el6 and are seeing the same issue.

Inserting into an S3 table while selecting from an HDFS table fails. We are using HiveServer 1 and the Hive CLI.







Which S3 provider are you using? Can you share the failing query and the stack trace? That may prove helpful.

We've tried both s3n and s3a; both give the same result:


hive> insert overwrite table s3_brand_test select * from brand;



Failed with exception Wrong FS: s3a://hdfs.hive/s3_brand_test/.hive-staging_hive_2015-07-27_15-27-59_802_7825996587812451043-1/-ext-10000/000000_0, expected: hdfs://
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask



java.lang.IllegalArgumentException: Wrong FS: s3a://hdfs.hive/s3_brand_test/.hive-staging_hive_2015-07-27_15-57-08_961_2773119355707995501-1/-ext-10000/000000_0, expected: hdfs:/
        at org.apache.hadoop.fs.FileSystem.checkPath(
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(
        at org.apache.hadoop.hdfs.DistributedFileSystem.getEZForPath(
        at org.apache.hadoop.hdfs.client.HdfsAdmin.getEncryptionZoneForPath(
        at org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.isPathEncrypted(
        at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(
        at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(
        at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(
        at org.apache.hadoop.hive.ql.exec.MoveTask.execute(
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(
        at org.apache.hadoop.hive.ql.Driver.launchTask(
        at org.apache.hadoop.hive.ql.Driver.execute(
        at org.apache.hadoop.hive.ql.Driver.runInternal(
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(
        at org.apache.hadoop.hive.cli.CliDriver.processLine(
        at org.apache.hadoop.hive.cli.CliDriver.processLine(
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(
        at org.apache.hadoop.hive.cli.CliDriver.main(
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(
        at java.lang.reflect.Method.invoke(
        at org.apache.hadoop.util.RunJar.main(

Cloudera Employee
This fix is in CDH 5.4.5; CDH 5.4.4 does not have it. CDH 5.4.5 has not been released yet.

Ah, sorry, I missed that.



Do we know the ETA on 5.4.5, or is there a patch we could apply?




Cloudera Employee
The current ETA for 5.4.5 is the end of August.
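Until 5.4.5 lands, one possible interim approach (an assumption on my part, not something confirmed in this thread as fixing this particular bug) is to keep Hive's write path entirely on HDFS, so MoveTask never has to resolve an s3a/s3n staging path against the default filesystem, and then copy the files to S3 outside of Hive. A sketch, reusing the `brand` and `s3_brand_test` tables from the failing query above:

```sql
-- Hypothetical workaround sketch, untested against this specific bug:
-- write to an HDFS-backed staging table first...
CREATE TABLE brand_staging LIKE brand;

INSERT OVERWRITE TABLE brand_staging
SELECT * FROM brand;

-- ...then copy the staged files to the S3 table location outside of Hive,
-- e.g. with:
--   hadoop distcp hdfs://<name_node>:8020/user/hive/warehouse/brand_staging \
--                 s3a://hdfs.hive/s3_brand_test
-- and refresh partition metadata afterwards if the target is partitioned:
-- MSCK REPAIR TABLE s3_brand_test;
```

This avoids the failing code path (the HDFS encryption-zone check on an S3 staging path visible in the stack trace) rather than fixing it.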