
Unable to create table in Accumulo

Explorer
Hi all,

I'm having a problem; please help me. I am able to use some features of the Accumulo shell, but I can't create or delete a table without getting the following error:

[impl.ThriftTransportPool] WARN: Thread "shell" stuck on IO to x.x.x.x:9999 (0) for at least 120040 ms

 

Thanks in advance.


15 REPLIES

Explorer

Do you need any more information?

Explorer

Waiting for your reply...

Explorer
Did you guys find any solution?

Explorer

I am getting the same error after completely reinstalling the Cloudera setup. I think this is a bug.

Expert Contributor (Accepted Solution)

Hi!

 

The problem you are hitting is a known limitation of Accumulo on small clusters. By default, Accumulo attempts to use a replication factor of 5 for its metadata table, ignoring the "table.file.replication" setting. Normally Cloudera Manager does not set a maximum replication factor, so on a small cluster this only causes under-replication warnings until you either add more nodes or manually adjust the replication setting on that table.

 

In your cluster, however, it appears the "dfs.replication.max" setting has been lowered to match your number of cluster nodes. This causes Accumulo's attempts to create new files for its internal tables to fail outright.

 

Unfortunately, I'm not sure this can be fixed without data loss. To recover, first raise the "dfs.replication.max" setting for HDFS to at least 5. Then set the replication on the metadata and root tables to a value no greater than your number of DataNodes. After that it should be safe to lower dfs.replication.max again.
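If the cluster is not managed by Cloudera Manager, the same cap is raised by editing hdfs-site.xml directly. This is a minimal sketch; dfs.replication.max is a standard HDFS property, but the file location depends on your distribution, and the change typically requires a NameNode restart to take effect:

  <!-- hdfs-site.xml: allow up to 5 replicas so Accumulo can create
       files for its internal tables on this small cluster -->
  <property>
    <name>dfs.replication.max</name>
    <value>5</value>
  </property>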

 

Adjust the replication in the Accumulo shell:

 

$> config -t accumulo.metadata -s table.file.replication=3
$> config -t accumulo.root -s table.file.replication=3
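To confirm the overrides took effect, the shell's config command can also filter which properties it displays; assuming your Accumulo shell version supports the -f filter option, something like:

$> config -t accumulo.metadata -f table.file.replication
$> config -t accumulo.root -f table.file.replication

The output should show the table-level value of 3 rather than the default.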

 

New Contributor

This works. But if you installed with Cloudera Manager, first change the HDFS setting dfs.replication.max via CM's configuration tab, then use the Accumulo shell as directed.
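Afterwards, you can sanity-check that Accumulo's files in HDFS are no longer flagged by running fsck over its directory. A sketch, assuming the default /accumulo directory; adjust the path if your instance.volumes (or instance.dfs.dir on older releases) setting points elsewhere:

$> hdfs fsck /accumulo -files -blocks

Healthy output ends with "The filesystem under path '/accumulo' is HEALTHY" and reports no under-replicated blocks.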

 

Jim Heyssel