NSU
Explorer
Posts: 26
Registered: ‎08-13-2014
Accepted Solution

Unable to create table in Accumulo

Hi All,

I am having a problem; please help me. I am able to use some features of the Accumulo shell, but I can't create or delete a table without getting the following error:

[impl.ThriftTransportPool] WARN: Thread "shell" stuck on io to x.x.x.x:9999:9999 (0) for at least 120040 ms

 

Thanks in advance.

Cloudera Employee
Posts: 31
Registered: ‎07-28-2014

Re: Unable to create table in Accumulo

Hi NSU,

 

If you navigate to the Monitor web page, do you see any messages under "recent logs"?

 

Alternatively, are there any ERROR or WARN messages in the tablet server logs?

 

 

Mike

NSU

Re: Unable to create table in Accumulo

Thank you for the reply. Here are the errors and warnings I am seeing in the log files:

 

18 22:56:44,0217  tserver:DN2  48  WARN  System swappiness setting is greater than ten (60) which can cause time-sensitive operations to be delayed. Accumulo is time sensitive because it needs to maintain distributed lock agreement.
18 22:56:46,0982  gc:DN1  48  WARN  System swappiness setting is greater than ten (60) which can cause time-sensitive operations to be delayed. Accumulo is time sensitive because it needs to maintain distributed lock agreement.
18 22:56:49,0061  master:DN1  48  WARN  System swappiness setting is greater than ten (60) which can cause time-sensitive operations to be delayed. Accumulo is time sensitive because it needs to maintain distributed lock agreement.
18 22:56:51,0503  tserver:master  48  WARN  System swappiness setting is greater than ten (60) which can cause time-sensitive operations to be delayed. Accumulo is time sensitive because it needs to maintain distributed lock agreement.
18 23:03:38,0935  tserver:DN2  103  ERROR
org.apache.hadoop.ipc.RemoteException(java.io.IOException): file /accumulo/tables/+r/root_tablet/F00000rq.rf_tmp on client xx.xx.xx.xx.
Requested replication 5 exceeds maximum 3
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.verifyReplication(BlockManager.java:942)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2216)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2188)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:505)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)

	org.apache.hadoop.ipc.RemoteException(java.io.IOException): file /accumulo/tables/+r/root_tablet/F00000rq.rf_tmp on client xx.xx.xx.xx.
	Requested replication 5 exceeds maximum 3
		at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.verifyReplication(BlockManager.java:942)
		at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2216)
		at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2188)
		at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:505)
		at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354)
		at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
		at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
		at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
		at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
		at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
		at java.security.AccessController.doPrivileged(Native Method)
		at javax.security.auth.Subject.doAs(Subject.java:415)
		at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
		at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)
	
		at org.apache.hadoop.ipc.Client.call(Client.java:1409)
		at org.apache.hadoop.ipc.Client.call(Client.java:1362)
		at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
		at com.sun.proxy.$Proxy14.create(Unknown Source)
		at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
		at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
		at java.lang.reflect.Method.invoke(Method.java:606)
		at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
		at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
		at com.sun.proxy.$Proxy14.create(Unknown Source)
		at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:258)
		at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1599)
		at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1461)
		at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1386)
		at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:394)
		at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:390)
		at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
		at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:390)
		at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:334)
		at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
		at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
		at org.apache.accumulo.core.file.rfile.RFileOperations.openWriter(RFileOperations.java:126)
		at org.apache.accumulo.core.file.rfile.RFileOperations.openWriter(RFileOperations.java:106)
		at org.apache.accumulo.core.file.DispatchingFileFactory.openWriter(FileOperations.java:80)
		at org.apache.accumulo.tserver.Compactor.call(Compactor.java:340)
		at org.apache.accumulo.tserver.MinorCompactor.call(MinorCompactor.java:96)
		at org.apache.accumulo.tserver.Tablet.minorCompact(Tablet.java:2045)
		at org.apache.accumulo.tserver.Tablet.access$4300(Tablet.java:170)
		at org.apache.accumulo.tserver.Tablet$MinorCompactionTask.run(Tablet.java:2132)
		at org.apache.accumulo.tserver.Tablet.minorCompactNow(Tablet.java:2238)
		at org.apache.accumulo.tserver.TabletServer$AssignmentHandler.run(TabletServer.java:2922)
		at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
		at org.apache.accumulo.tserver.TabletServer$ThriftClientHandler$3.run(TabletServer.java:2277)
18 23:03:38,0937  tserver:DN2  103  WARN
MinC failed (file /accumulo/tables/+r/root_tablet/F00000rq.rf_tmp on client xx.xx.xx.xx.
Requested replication 5 exceeds maximum 3
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.verifyReplication(BlockManager.java:942)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2216)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2188)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:505)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)
) to create hdfs://DN1:8020/accumulo/tables/+r/root_tablet/F00000rq.rf_tmp retrying ...
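For the swappiness warnings above, Accumulo's guidance is to set vm.swappiness to 10 or less. A minimal sketch of checking and lowering it on each node (standard Linux sysctl commands; the value 10 is simply the threshold the warning names, and persisting via /etc/sysctl.conf is one common approach, not the only one):

```shell
# Check the current value on each Accumulo node (the warning reports 60)
cat /proc/sys/vm/swappiness

# Lower it for the running kernel (takes effect immediately)
sudo sysctl -w vm.swappiness=10

# Persist the setting across reboots
echo "vm.swappiness = 10" | sudo tee -a /etc/sysctl.conf
```

This addresses only the WARN messages; the replication ERROR in the same log is a separate problem.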
Cloudera Employee

Re: Unable to create table in Accumulo

Thanks for sharing those logs; that is very helpful.

 

Just to confirm, which versions of Accumulo and CDH are you using? Also, if you are using Cloudera Manager, which version of that?

NSU

Re: Unable to create table in Accumulo

I am using the following versions:

Accumulo - 1.6

CDH - 5

Cloudera Manager - 5.1

 

Thank you for the response.

 

NSU

Re: Unable to create table in Accumulo


Hi,

Following are some of the logs I am seeing in the per-table problem report.

accumulo.root  FILE_WRITE  xx.xx.xx  2014/08/18 13:21:33 EDT  hdfs://DN1:8020/accumulo/tables/+r/root_tablet/F00000e9.rf_tmp
file /accumulo/tables/+r/root_tablet/F00000e9.rf_tmp on client xx.xx.xx. Requested replication 5 exceeds maximum 2
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.verifyReplication(BlockManager.java:942)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2216)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2188)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:505)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)



Thank you for the response.
Please help me.
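The repeated "Requested replication 5 exceeds maximum" errors point at a likely cause: Accumulo 1.6 requests replication 5 for its root and metadata table files by default (the table.file.replication property), while this cluster's HDFS caps replication lower. A hedged sketch of lowering the requested replication from the Accumulo shell (the accumulo.root and accumulo.metadata table names are the 1.6 defaults; choose a value no larger than your HDFS dfs.replication.max, which the logs here report as 2 or 3):

```shell
# Inside the accumulo shell, as a user permitted to alter system tables:
# lower the replication requested for root and metadata table files
config -t accumulo.root -s table.file.replication=3
config -t accumulo.metadata -s table.file.replication=3

# Verify the property took effect
config -t accumulo.root -f table.file.replication
```

Alternatively, raising dfs.replication.max on the HDFS side to at least 5 would let the Accumulo default stand.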

Cloudera Employee

Re: Unable to create table in Accumulo

Hi NSU,

 

A couple more questions about your configuration. I think I've identified the issue and am working on a solution for you.

 

I would like to make sure that you used Cloudera Manager to initialize Accumulo. Please confirm that this is the case.

 

Are you using a single node cluster? If not, how many hosts do you have? An order of magnitude is fine if you do not have the exact number available.

 

Thanks,

Mike

 

 

NSU

Re: Unable to create table in Accumulo

Hi Mike,

I did use Cloudera Manager to initialize Accumulo.

I am using a three-node cluster.

Thanks,
Ms. Ramya
Cloudera Employee

Re: Unable to create table in Accumulo

Can you give me some description of the layout of the roles on your three nodes?

 

I expect that it is NameNode on one, Accumulo Master on the second, and DataNode + Tablet Server on the third. Is this correct?

NSU
Explorer
Posts: 26
Registered: ‎08-13-2014

Re: Unable to create table in Accumulo

I am attaching the roles running on each node for reference.

I named my nodes: Master, DN1, DN2.

Following are the roles on my three cluster nodes:

Roles

Master: Tablet Server [Accumulo], DataNode, Node Manager [YARN], Server [ZooKeeper]

DN1: Garbage Collector [Accumulo], Master [Accumulo], Monitor [Accumulo], Tracer [Accumulo], NameNode, Secondary NameNode, DataNode, Server [ZooKeeper]

DN2: Tablet Server [Accumulo], DataNode, Server [ZooKeeper]

Thanks,

Ms. Ramya
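Given three DataNodes, an alternative to lowering Accumulo's requested replication is to raise the HDFS-side cap so a request of 5 is accepted (HDFS will still place at most one replica per DataNode). A sketch of checking and adjusting the cap with the stock hdfs client (assuming it is on the path on a cluster host):

```shell
# Show the configured maximum block replication
hdfs getconf -confKey dfs.replication.max

# If it is below 5, raise it in hdfs-site.xml (or the equivalent
# Cloudera Manager HDFS setting) and restart HDFS:
#   <property>
#     <name>dfs.replication.max</name>
#     <value>512</value>  <!-- the stock HDFS default -->
#   </property>
```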