Our Hive application is failing with the following error:
[HiveServer2-Handler-Pool: Thread-3522853]: Job Submission failed with exception 'java.io.IOException(Unable to close file because the last block does not have enough number of replicas.)'
java.io.IOException: Unable to close file because the last block does not have enough number of replicas.
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2600)
    at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2562)
We tried several suggested fixes, such as increasing the relevant settings, but Cloudera Manager (CM) doesn't expose options for them.
If CM doesn't expose a setting directly, you have to use an Advanced Configuration Snippet (ACS, also called a Safety Valve).
It isn't always easy to figure out which ACS to put the settings in. The first step is to search by the file they belong in, which I believe is hdfs-site.xml. For the two client settings, my guess is you'll want the Gateway ACS (there may not be one specifically for core-site.xml). The block report setting is specific to the DataNodes, so look for an ACS for the DataNode role for hdfs-site.xml.
If you use the service-level ACS instead, the setting will apply to all roles in the service.
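As an illustration of what such a snippet looks like: the property below is an assumption on my part, not necessarily one of the settings you tried. It is a real HDFS client setting (dfs.client.block.write.locateFollowingBlock.retries) that is commonly raised for this particular "not enough replicas" error. Pasted into the Gateway ACS for hdfs-site.xml, it would look something like:

```xml
<!-- Hypothetical example for the HDFS Gateway Advanced Configuration
     Snippet (Safety Valve) for hdfs-site.xml. -->
<property>
  <name>dfs.client.block.write.locateFollowingBlock.retries</name>
  <!-- Default is 5; raising it makes the client retry longer on close
       while the NameNode waits for the last block's replicas to be
       reported by the DataNodes. -->
  <value>10</value>
</property>
```

After saving the snippet, CM will flag the affected roles as having a stale configuration, and the change takes effect once the client configurations are redeployed.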
It seems like, if we are not confident that changing those configurations will resolve the HDFS IOException in our Hive application, it's better not to mess around with them.
Do you have any suggestions about the root cause of our exception?