
Operation category READ is not supported in state standby while writing to Hive table through Informatica


Hi @Jay Kumar SenSharma,

While writing to Hive through Informatica IDQ, I am getting the error below.

following directory: [/home/minfaadmin/informatica102].
2018-02-01 15:58:59.438 <LdtmCompile-pool-2-thread-23> INFO: [AUTOINST_3029] The cluster Hadoop distribution type is: [hortonworks_2.6].
2018-02-01 15:58:59.784 <LdtmCompile-pool-2-thread-23> INFO: [AUTOINST_3001] The MD5 hexadecimal value for the Informatica archive is [d152bc991f7cf7a905c05b22e494b30e].
2018-02-01 15:58:59.785 <LdtmCompile-pool-2-thread-23> INFO: [AUTOINST_3026] The Informatica archive already exists on the Data Integration Service machine. No archiving necessary.
2018-02-01 16:01:42.561 <LdtmCompile-pool-2-thread-23> SEVERE: Data integration service failed to create DTM instance because of the following error: java.lang.RuntimeException: [java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2000)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1377)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4105)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1136)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:854)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
]
2018-02-01 16:01:42.564 <LdtmCompile-pool-2-thread-23> SEVERE: [DSCMN_10282] The Integration Service failed to submit the mapping [m_load_mms_dl_his_mb_followupdet] because of the following error: [java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2000)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1377)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4105)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1136)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:854)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
].
com.informatica.ds.common.exceptions.DataServiceRuntimeException: [DSCMN_10282] The Integration Service failed to submit the mapping [m_load_mms_dl_his_mb_followupdet] because of the following error: [java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby at WA].

3 REPLIES

Re: Operation category READ is not supported in state standby while writing to Hive table through Informatica

@kotesh banoth It looks like your client is connecting to the standby NameNode. Point it at the active NameNode, or better, at the HA nameservice instead.
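
The idea is to have the client address HDFS by its HA nameservice rather than a specific NameNode host, so the HDFS client library automatically fails over to whichever NameNode is active. A minimal sketch of the relevant client-side configuration, assuming a nameservice named `mycluster` with NameNode IDs `nn1` and `nn2` (all placeholder names and hosts; substitute your cluster's actual values):

```xml
<!-- core-site.xml: point the default filesystem at the nameservice,
     not at a single NameNode host -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

<!-- hdfs-site.xml: describe the nameservice and enable client failover -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

For Informatica specifically, the `*-site.xml` files the Data Integration Service uses must carry the same HA settings as the cluster. On the cluster itself, `hdfs haadmin -getServiceState nn1` reports whether a given NameNode is currently active or standby.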


Re: Operation category READ is not supported in state standby while writing to Hive table through Informatica

@Sandeep Nemuri Can you help me with how to switch to the active NameNode or the nameservice?

Thanks
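
One quick way to see what the client is actually configured with is to read `fs.defaultFS` out of the `core-site.xml` the client uses: if it names a single NameNode host rather than the nameservice, requests fail whenever that node is in standby. A sketch using a sample file (the path and values here are illustrative, not your real config):

```shell
# Write a sample core-site.xml (stand-in for the client's real file).
cat > /tmp/core-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
</configuration>
EOF

# Extract the fs.defaultFS value: it should be the HA nameservice URI
# (e.g. hdfs://mycluster), not hdfs://<single-namenode-host>:8020.
grep -A1 '<name>fs.defaultFS</name>' /tmp/core-site-sample.xml \
  | grep -o '<value>[^<]*</value>' \
  | sed -e 's/<value>//' -e 's/<\/value>//'
```

If the value is a single host, change it to the nameservice URI (and make sure the matching `dfs.nameservices` / failover-provider properties exist in `hdfs-site.xml`), then restart the client service.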


Re: Operation category READ is not supported in state standby while writing to Hive table through Informatica


Any solution on this?
