User not allowed to do 'DECRYPT_EEK' despite the group to which the user belongs having proper access

Hi All,

I have created an encryption zone, but I am not able to copy data into it as USER_1 (which belongs to GROUP_1). I get the error below:

copyFromLocal: User:USER_1 not allowed to do 'DECRYPT_EEK' on 'key1'

In the Ranger KMS policies I have given full access to the group GROUP_1, but I am still facing this issue. Do group-level policies not apply to Ranger KMS, or is there some configuration I have to tweak to make this work?

Please help me understand this issue; any clue or suggestion is appreciated.

FYI, the cluster is Kerberized.
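For reference, the setup was roughly like this (my own reconstruction of the steps; the zone path below is a placeholder, while 'key1' is the key named in the error):

$ hadoop key create key1                             # create the key in Ranger KMS (as a key admin)
$ hdfs crypto -createZone -keyName key1 -path /zone  # create the encryption zone (as the hdfs superuser, on an empty dir)
$ kinit USER_1                                       # USER_1 belongs to GROUP_1
$ hadoop fs -copyFromLocal test3 /zone               # fails with the DECRYPT_EEK error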

Thanks in advance.

26 Replies

Mentor

@sachin gupta

Have you created a keytab for USER_1? Check with:

$ ls -al /etc/security/keytabs 

Did USER_1 grab a valid Kerberos ticket? As USER_1, run:

$ klist 

Do you have any output?

USER_1 might have been blacklisted by the Ambari property hadoop.kms.blacklist.DECRYPT_EEK. That's the most probable reason why you are unable to decrypt as the 'USER_1' user.
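For reference, the property sits in the KMS configuration (kms-site.xml, exposed through Ambari's Ranger KMS configs; the exact file may vary by install) and looks like the sketch below. The value shown is the usual default, which blacklists only the hdfs superuser:

<property>
  <name>hadoop.kms.blacklist.DECRYPT_EEK</name>
  <value>hdfs</value>
</property>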

Did you give USER_1 read permission on that encryption zone?

@Geoffrey Shelton Okot

Have you created a keytab for USER_1? Check with:

I am not using a keytab for access.

Did USER_1 grab a valid Kerberos ticket? As USER_1, run:

Yes, I have created a proper ticket using the kinit command.
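That is, something like the following (REALM is a placeholder for the actual Kerberos realm):

$ kinit USER_1
$ klist   # should show a valid krbtgt ticket for USER_1@REALM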

USER_1 might have been blacklisted by the Ambari property hadoop.kms.blacklist.DECRYPT_EEK. That's the most probable reason why you are unable to decrypt as the 'USER_1' user.

No. I have looked in the Ranger KMS configs; only the hdfs user is blacklisted.

Did you give read permission to USER_1 to that encryption zone?

What do you mean by this?

==============================================================

FYI, as soon as I add USER_1 to the access list, it works just fine. It only seems to fail when access is given at the group level.
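(One thing that may be worth ruling out, though I have not verified it on this cluster: whether Hadoop's group mapping on the KMS side actually resolves USER_1 into GROUP_1. For example:)

$ id USER_1            # OS-level group membership on the Ranger KMS host
$ hdfs groups USER_1   # groups as Hadoop's group mapping resolves them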

Attaching some logs which might help you understand the problem better:

# hadoop fs -copyFromLocal test3 /test/1234
copyFromLocal: User:USER_1 not allowed to do 'DECRYPT_EEK' on '1234'
17/08/10 13:48:45 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.complete over null. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /test/1234/test3._COPYING_ (inode 329412): File does not exist. Holder DFSClient_NONMAPREDUCE_-196143005_1 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3521)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3611)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3578)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:905)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:544)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
        at org.apache.hadoop.ipc.Client.call(Client.java:1496)
        at org.apache.hadoop.ipc.Client.call(Client.java:1396)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at com.sun.proxy.$Proxy16.complete(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:501)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
        at com.sun.proxy.$Proxy17.complete(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2361)
        at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2338)
        at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2303)
        at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:947)
        at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:979)
        at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1192)
        at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2852)
        at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2869)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


New Contributor

I met the same problem:

$ hdfs dfs -put ./ca.key /tmp/webb

put: User:jj not allowed to do 'DECRYPT_EEK' on 'zonekey1'

I fixed the problem by adding permissions for user 'jj'.

[screenshot attachment: 27512-kms1.png]

I am not really looking for that kind of solution. It works when you give permission at the user level, but I want it to work at the group level. I think I have explained the question well; if not, let me know what I can do to make it clearer.

Thanks anyway for your reply.

Mentor

@sachin gupta

So it was a permissions issue, as I had earlier stated! Can you validate the same on your cluster?

No, @Geoffrey Shelton Okot, this is not the solution. I want to give the permission at the group level.

Mentor

@sachin gupta

KMS has an ACL file named kms-acls.xml. Can you copy and paste or attach its contents in this thread?
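(On an HDP install the file usually lives under the Ranger KMS conf directory; the path below is an assumption, so adjust it to your layout.)

$ cat /etc/ranger/kms/conf/kms-acls.xml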

I checked the kms-acls.xml file, and all the properties are set to *. Is there any property I need to change to make this work?

New Contributor

@Geoffrey Shelton Okot

My KMS is newly installed. I checked kms-acls.xml; all of the configuration is at the defaults (almost everything is "*"). I changed the policy, and it works.

[screenshot attachment: 31382-kms2.png]

You can see the "Select Group" column, right? Did you try putting a group name there and test it? If yes, please say so; if not, please try it and then report back.

Mentor

@sachin gupta @webb wang

All the same, you should have attached your kms-acls.xml so I could look at it. Having said that, can you add this entry to kms-acls.xml:

<property>
  <name>key.acl.key4USER_1.DECRYPT_EEK</name>
  <value>USER_1 GROUP_1</value>
</property>

Keep me posted

If you need it to solve the issue, here's my kms-acls.xml:

<configuration>
  <property>
    <name>hadoop.kms.acl.CREATE</name>
    <value>*</value>
    <description>
      ACL for create-key operations.
      If the user is not in the GET ACL, the key material is not returned
      as part of the response.
    </description>
  </property>


  <property>
    <name>hadoop.kms.acl.DELETE</name>
    <value>*</value>
    <description>
      ACL for delete-key operations.
    </description>
  </property>


  <property>
    <name>hadoop.kms.acl.ROLLOVER</name>
    <value>*</value>
    <description>
      ACL for rollover-key operations.
      If the user is not in the GET ACL, the key material is not returned
      as part of the response.
    </description>
  </property>


  <property>
    <name>hadoop.kms.acl.GET</name>
    <value>*</value>
    <description>
      ACL for get-key-version and get-current-key operations.
    </description>
  </property>


  <property>
    <name>hadoop.kms.acl.GET_KEYS</name>
    <value>*</value>
    <description>
      ACL for get-keys operations.
    </description>
  </property>


  <property>
    <name>hadoop.kms.acl.GET_METADATA</name>
    <value>*</value>
    <description>
      ACL for get-key-metadata and get-keys-metadata operations.
    </description>
  </property>


  <property>
    <name>hadoop.kms.acl.SET_KEY_MATERIAL</name>
    <value>*</value>
    <description>
      Complementary ACL for CREATE and ROLLOVER operations to allow the client
      to provide the key material when creating or rolling a key.
    </description>
  </property>


  <property>
    <name>hadoop.kms.acl.GENERATE_EEK</name>
    <value>*</value>
    <description>
      ACL for generateEncryptedKey CryptoExtension operations.
    </description>
  </property>


  <property>
    <name>hadoop.kms.acl.DECRYPT_EEK</name>
    <value>*</value>
    <description>
      ACL for decryptEncryptedKey CryptoExtension operations.
    </description>
  </property>


  <property>
    <name>default.key.acl.MANAGEMENT</name>
    <value>*</value>
    <description>
      default ACL for MANAGEMENT operations for all key acls that are not
      explicitly defined.
    </description>
  </property>


  <property>
    <name>default.key.acl.GENERATE_EEK</name>
    <value>*</value>
    <description>
      default ACL for GENERATE_EEK operations for all key acls that are not
      explicitly defined.
    </description>
  </property>


  <property>
    <name>default.key.acl.DECRYPT_EEK</name>
    <value>*</value>
    <description>
      default ACL for DECRYPT_EEK operations for all key acls that are not
      explicitly defined.
    </description>
  </property>


  <property>
    <name>default.key.acl.READ</name>
    <value>*</value>
    <description>
      default ACL for READ operations for all key acls that are not
      explicitly defined.
    </description>
  </property>




</configuration>

@Geoffrey Shelton Okot, did you get some time to look at the kms-acls.xml file I attached in my previous comment? The solution you gave did not work, and I am still not able to set the policies at the group level. Please let me know if you have something else we can try.

Mentor

@sachin gupta

This is the property to change. Always make a copy of the original file first:

$ cp kms-acls.xml kms-acls.xml.bak

<property>
    <name>default.key.acl.DECRYPT_EEK</name>
    <value>*</value>
    <description>
      default ACL for DECRYPT_EEK operations for all key acls that are not
      explicitly defined.
    </description>
  </property>

What's the name of the key USER_1 is trying to decrypt (the <keyname> in key.acl.<keyname>.DECRYPT_EEK)?

Assuming it's 'test', you should have an entry like this in your kms-acls.xml (the value format is users, then a space, then groups):

<property>
  <name>key.acl.test.DECRYPT_EEK</name>
  <value>USER_1 GROUP_1</value>
</property>

It is usually advisable to use Ambari to change any HDP parameter.

Please let me know, and of course restart the appropriate components for the stale configs to take effect.

Mentor

@sachin gupta

I have seen your attached kms-acls.xml. Have you changed the values? If so, can you copy and paste the specific entry below?

 <property>
    <name>hadoop.kms.acl.DECRYPT_EEK</name>
    <value>*</value>
    <description>
      ACL for decryptEncryptedKey CryptoExtension operations.
    </description>
  </property>

No, @Geoffrey Shelton Okot, I did not change anything.

Mentor

@sachin gupta

Then change it to USER_1 and GROUP_1 and retest.

@Geoffrey Shelton Okot, do you know of any solution in which I don't have to specify the user name? Is there no way to create the policy at the group level by specifying only the group name?

Mentor

@sachin gupta

Could you tell me your Ranger and HDP versions? I could reproduce it and test. A description of your setup steps would also help.

HDP is 2.5.3 and Ranger is 0.6.0.
