StreamSets writing to HDFS issues

Explorer

We have StreamSets (STS) installed using Cloudera parcels on CDH 5.6.0.

A pipeline is generating test data and trying to save it to HDFS.

Kerberos is enabled, and the keytab file is in the Configuration Directory.

 

But I get the following error:

 

HADOOPFS_44 - Could not verify the base directory: 'org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: sdc/<HOST>.lab.nordigy.ru@LAB.NORDIGY.RU is not allowed to impersonate devuser@LAB.NORDIGY.RU'

 

Anyone have an idea?


3 REPLIES

ACCEPTED SOLUTION

Explorer

The problem was solved as follows:

 

Go to Cloudera Manager -> HDFS -> Configuration.

 

Then, in the "Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml" field, add:

 

<property>
  <name>hadoop.proxyuser.sdc.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.sdc.groups</name>
  <value>*</value>
</property>

 

Then restart the cluster.
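
Note that the wildcards above allow the sdc principal to impersonate any user from any host, which is exactly what the NameNode was refusing before. If that is too broad for your environment, the same properties can be narrowed. A sketch with placeholder host and group names (these are assumptions, not values from this cluster):

<property>
  <name>hadoop.proxyuser.sdc.hosts</name>
  <!-- placeholder: list only the hosts where StreamSets Data Collector runs -->
  <value>sdc-host1.example.com,sdc-host2.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.sdc.groups</name>
  <!-- placeholder: limit impersonation to members of specific groups -->
  <value>etl</value>
</property>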

New Contributor

I followed the procedure above and am now getting this error:

HADOOPFS_44 - Could not verify the base directory: 'org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1754)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1337)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4032)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:849)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:502)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:815)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) '
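
A StandbyException like this usually means the pipeline's Hadoop FS URI points directly at a NameNode host that is currently the standby in an HA pair. Writing to the HA nameservice instead (e.g. hdfs://nameservice1 rather than hdfs://<namenode-host>:8020) lets the client fail over to the active NameNode. As a sketch, the standard client-side HDFS HA settings look like the following; the nameservice name and hosts here are placeholders, not values from this thread:

<property>
  <name>dfs.nameservices</name>
  <value>nameservice1</value>
</property>
<property>
  <name>dfs.ha.namenodes.nameservice1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <!-- placeholder hosts: use your cluster's actual NameNode addresses -->
  <name>dfs.namenode.rpc-address.nameservice1.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.nameservice1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

On a parcel-based CDH cluster these values are normally already present in the generated client configuration, so the usual fix is simply to point the stage's Hadoop FS URI at the nameservice rather than at a specific NameNode host.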

Expert Contributor

This setting is a global one. Are there options at the pipeline level?