Hive fails with "does not have enough number of replicas" error in HDFS

New Contributor

Our Hive application is failing with the following error: 

 

[HiveServer2-Handler-Pool: Thread-3522853]: Job Submission failed with exception 'java.io.IOException(Unable to close file because the last block does not have enough number of replicas.)'
java.io.IOException: Unable to close file because the last block does not have enough number of replicas.
	at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2600)
	at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2562)

 

We wanted to try suggested fixes, such as increasing the following settings, but CM doesn't have an option to do so.

 

dfs.client.block.write.locateFollowingBlock.retries
dfs.client.retry.interval-ms.get-last-block-length
dfs.blockreport.incremental.intervalMsec 

Any suggestions?

2 REPLIES

Champion

If CM doesn't have a setting, you have to use an Advanced Configuration Snippet (ACS).

 

It isn't always easy to figure out which snippet to put the settings in. The first step is to identify the file they belong in, which I believe is hdfs-site.xml. My guess for the two client settings is that you will want the Gateway ACS for hdfs-site.xml (there may not be one specifically for core-site.xml). The block report setting is specific to the DataNodes, so look for an ACS for the DataNode role for the hdfs-site.xml file.
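
For example, the two client settings pasted into the Gateway ACS for hdfs-site.xml would look something like this. The values here are only illustrative, not tested recommendations for your cluster:

<property>
  <name>dfs.client.block.write.locateFollowingBlock.retries</name>
  <!-- Default is 5; a higher value gives the client more attempts while
       the NameNode waits for enough replicas of the last block before
       close() gives up. Example value only. -->
  <value>10</value>
</property>
<property>
  <name>dfs.client.retry.interval-ms.get-last-block-length</name>
  <!-- Retry interval in milliseconds; example value only. -->
  <value>6000</value>
</property>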

 

If you use the service-level ACS, it will apply to all roles in the service.
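
At the role level, the block report setting would go into the DataNode ACS for hdfs-site.xml in the same way. Again, the value is just an example:

<property>
  <name>dfs.blockreport.incremental.intervalMsec</name>
  <!-- How long the DataNode waits so it can batch incremental block
       reports; 0 (the default) means report immediately. Example value only. -->
  <value>100</value>
</property>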

 

http://www.cloudera.com/documentation/manager/5-1-x/Cloudera-Manager-Managing-Clusters/cm5mc_config_...

New Contributor

It seems that if we are not confident that changing those configurations will solve the HDFS IOException in our Hive application, it's better not to mess around with them.

Do you have any suggestions about the root cause of our exception?