
HDFS Encryption Zone - HBase shutting down

Expert Contributor

Hi - I'm trying to evaluate and implement data-at-rest encryption for HBase.

Here is what has been done so far (the equivalent CLI commands are sketched after the list):

- Created the folder /encrypt_hbase1/hbase

- Created an encryption zone on path /encrypt_hbase1, using the key testkeyfromcli

- Added the folders /encrypt_hbase1/hbase/staging and /encrypt_hbase1/hbase/data

- Changed the following properties in hbase-site.xml to point HBase at the encrypted locations:

hbase.rootdir => hdfs://sandbox.hortonworks.com:8020/encrypt_hbase1/hbase/data

hbase.bulkload.staging.dir => /encrypt_hbase1/hbase/staging

- Gave the hbase user access to the locations under /encrypt_hbase1 (recursive), using Ranger

- Gave the hbase user access to the key testkeyfromcli, using Ranger
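For reference, the setup above maps to roughly these CLI commands (a sketch, assuming an HDP sandbox with Ranger KMS as the key provider; the owner/group in the chown are assumptions - adjust for your cluster):

  hadoop key create testkeyfromcli                        # create the encryption key in the KMS
  sudo -u hdfs hdfs dfs -mkdir /encrypt_hbase1            # the zone root must exist and be empty
  sudo -u hdfs hdfs crypto -createZone -keyName testkeyfromcli -path /encrypt_hbase1
  sudo -u hdfs hdfs dfs -mkdir -p /encrypt_hbase1/hbase/data /encrypt_hbase1/hbase/staging
  sudo -u hdfs hdfs dfs -chown -R hbase:hdfs /encrypt_hbase1/hbase   # assumed owner/group for the HBase dirs
  sudo -u hdfs hdfs crypto -listZones                     # verify the zone -> key mapping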

I restarted HBase using Ranger, and it starts up.

However, when I try to access the tables (using the list command), the region server shuts down and the command errors out.

Any ideas on what needs to be done?

Attached are screenshots of the Ranger policies for the HDFS location and the key:

screen-shot-2017-01-24-at-62538-pm.png

screen-shot-2017-01-24-at-62459-pm.png

----------------------------------------------------------------

hbase(main):003:0> list
TABLE

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2314)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getTableDescriptors(MasterRpcServices.java:853)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:53136)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could be used to filter the output.

Examples:
hbase> list
hbase> list 'abc.*'
hbase> list 'ns:abc.*'
hbase> list 'ns:.*'
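For anyone hitting the same PleaseHoldException: the master log usually records why initialization is stuck. A quick check (a sketch, assuming the default HDP log location and file naming):

  grep -iE 'FATAL|ERROR' /var/log/hbase/hbase-hbase-master-*.log | tail -n 20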

1 ACCEPTED SOLUTION

Expert Contributor

@mqureshi -

Appreciate your help with this... I finally figured out the cause of the issue.

I was trying to use Ranger to restrict access to tables created in the encryption zone, and in the process had removed the 'hbase' user's access to the encrypted HDFS location and to the key.

This was causing the issue in starting up HBase. I've added back the permission to the HDFS location and the key, and am able to start up HBase and create and access tables.
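To catch this kind of misconfiguration early, it can help to verify the hbase user's access from the command line (a sketch, assuming the names used in this thread):

  sudo -u hbase hdfs dfs -ls /encrypt_hbase1/hbase    # should list without an AccessControlException
  sudo -u hbase hadoop key list                       # should include testkeyfromcli
  sudo -u hdfs hdfs crypto -listZones                 # shows the zone -> key mapping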

However, one more issue -

How do I restrict access to the created table using Ranger?

Here is what I did:

1) Removed global access to HBase tables
2) Gave access to the created table, 'emp'

However, now I'm not able to see the created table.

Any ideas on how to achieve this?


15 REPLIES

Expert Contributor

@Mahesh M. Pillai, @svenkat - any ideas on this?

Super Guru

@Karan Alang

Did you try clearing your ZooKeeper directory? Your ZooKeeper data directory is hbase.zookeeper.property.dataDir (in your hbase-site.xml). Log in to the ZooKeeper CLI and run rmr /path. Make sure both HBase and ZooKeeper are shut down.

Expert Contributor

@mqureshi - somehow I don't see the property hbase.zookeeper.property.dataDir in the hbase-site.xml file.

Attaching the file hbase-site.xml (location: /etc/hbase/conf/hbase-site.xml).

Any ideas? I'm on the HDP 2.4 sandbox.

hbase-site.xml

Super Guru
@Karan Alang

Here is what I would suggest: run the ZooKeeper CLI by launching zkCli.sh (probably at the location below). Then do "ls /" to find the znodes. I think you should see a znode for HBase, which should be /hbase; if not, it's probably something similar. Then just run rmr <path>, for example rmr /hbase.

  1. /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181 (assuming ZooKeeper is local)
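An example session (a sketch; check zookeeper.znode.parent in hbase-site.xml for the actual parent znode - it is often /hbase-unsecure on an unsecured HDP install):

  /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181
  ls /                       # find the HBase parent znode
  rmr /hbase-unsecure        # remove it, with HBase stopped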

Expert Contributor

@mqureshi -

Thanks - the znode corresponding to HBase is /hbase-unsecure, and I logged on and cleared /hbase-unsecure in ZooKeeper.

Then I restarted HBase; however, the region server is shutting down. Attaching the log file and hbase-site.xml.

hbase-regionserver-logs.txt

hbase-sitexml.txt

Error snippet shown below - any ideas on this?

------------------------------

2017-01-25 21:22:57,337 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2017-01-25 21:22:57,417 WARN [RS_OPEN_META-sandbox:16020-0] ipc.Client: interrupted waiting to send rpc request to server
java.lang.InterruptedException
    at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:400)
    at java.util.concurrent.FutureTask.get(FutureTask.java:187)
    at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1057)
    at org.apache.hadoop.ipc.Client.call(Client.java:1400)
    at org.apache.hadoop.ipc.Client.call(Client.java:1358)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
    at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1315)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1311)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
    at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
    at org.apache.hadoop.hbase.wal.WALSplitter.writeRegionSequenceIdFile(WALSplitter.java:716)
    at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:860)
    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:794)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6328)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6289)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6260)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6216)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6167)
    at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
    at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2017-01-25 21:22:57,418 ERROR [RS_OPEN_META-sandbox:16020-0] handler.OpenRegionHandler: Failed open of region=hbase:meta,,1.1588230740, starting to roll back the global memstore size.
java.io.IOException: java.lang.InterruptedException
    at org.apache.hadoop.ipc.Client.call(Client.java:1406)
    at org.apache.hadoop.ipc.Client.call(Client.java:1358)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufR
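The failing frame is WALSplitter.writeRegionSequenceIdFile, i.e. the region server dies while creating a file under the encrypted root dir. A quick sketch to test whether the hbase user can write and read inside the zone (the test file name is hypothetical):

  sudo -u hbase hdfs dfs -touchz /encrypt_hbase1/hbase/data/perm_test   # creating a file in an encryption zone needs DECRYPT_EEK access to the key
  sudo -u hbase hdfs dfs -cat /encrypt_hbase1/hbase/data/perm_test      # opening it exercises the KMS decrypt path again
  sudo -u hbase hdfs dfs -rm /encrypt_hbase1/hbase/data/perm_test       # clean up the test file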

Super Guru

Can you please share the HBase master server logs? First start the master, then start the region servers. When you run the same zkCli "ls /" command, do you see /hbase-unsecure back? You should, because the master should recreate this znode and everything under it. It might take a while, so start the master and give it some time. Check that /hbase-unsecure exists and also check the sub-znodes. See the following link - does the structure of your /hbase-unsecure match what's explained there?

https://community.hortonworks.com/articles/73627/hbase-zookeeper-znodes-explained.html

Expert Contributor

hbase-masterlog.txt

@mqureshi - it seems the HBase master is not starting now; attached is the log file.

Any ideas?

Please note: this cluster is not kerberized.

----------------------------------------

2017-01-26 07:06:47,104 INFO [master/sandbox.hortonworks.com/10.0.2.15:16000] client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
2017-01-26 07:06:47,455 FATAL [sandbox:16000.activeMasterManager] master.HMaster: Failed to become active master
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
    at sun.net.www.http.HttpClient.New(HttpClient.java:308)
    at sun.net.www.http.HttpClient.New(HttpClient.java:326)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:998)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:934)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:852)
    at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:190)
    at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128)
    at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
    at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:483)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
    at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
    at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1395)
    at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1465)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:305)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:299)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:299)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767)
    at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:571)
    at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:656)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:455)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:146)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:126)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:667)
    at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:191)
    at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1783)
    at java.lang.Thread.run(Thread.java:745)
2017-01-26 07:06:47,467 FATAL [sandbox:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
    at sun.net.www.http.HttpClient.New(HttpClient.java:308)
    at sun.net.www.http.HttpClient.New(HttpClient.java:326)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:998)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:934)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:852)
    at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:190)
    at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128)
    at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
    at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:483)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
    at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
    at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1395
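The trace fails inside KMSClientProvider with "Connection refused", which points at the master being unable to reach the Ranger KMS rather than at HBase itself. A quick connectivity sketch (the host and port are assumptions - 9292 is the usual Ranger KMS port on HDP, and the user.name parameter is just for non-kerberized pseudo auth):

  hdfs getconf -confKey dfs.encryption.key.provider.uri    # which KMS endpoint HDFS clients will call
  curl -i 'http://sandbox.hortonworks.com:9292/kms/v1/keys/names?user.name=hbase'   # "Connection refused" here reproduces the error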

Super Guru

@Karan Alang

I am assuming you stopped HBase when you cleaned up ZooKeeper and restarted it afterwards. For now, shut down your HBase. We need to look into ZooKeeper. Can you please share the ZooKeeper logs (/var/log/zookeeper/)?

Since this is a sandbox, do you have any data there?

Can you try running "zookeeper-client" and share the output?

Expert Contributor

@mqureshi

- attaching the ZooKeeper log and the HBase master log:

zookeeper-zookeeper-server-sandboxhortonworkscomou.txt

hbase-masterlog.txt

Also, there is no data yet in the sandbox (in the new encrypted location), and I have cleaned up and restarted ZooKeeper multiple times.

Please note: if I revert to the original unencrypted location and restart HBase, it works fine.

This is what I see on running zookeeper-client:

Command:

/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server sandbox.hortonworks.com:2181

[zk: sandbox.hortonworks.com:2181(CONNECTED) 4] ls /

[clusterstate.json, consumers, hiveserver2, storm, rmstore, controller_epoch, configs, isr_change_notification, admin, zookeeper, aliases.json, config, hbase-unsecure, registry, templeton-hadoop, live_nodes, overseer, overseer_elect, collections, brokers]

[zk: 127.0.0.1:2181(CONNECTED) 16] ls /hbase-unsecure

[recovering-regions, splitWAL, rs, backup-masters, region-in-transition, draining, table, table-lock]

... so on starting the HBase master, it seems some of the znodes under /hbase-unsecure are not getting created, including /hbase-unsecure/master.

What needs to be done for this?
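A way to watch whether the master manages to register itself (a sketch, re-running the ls while the master starts, with the paths from this thread):

  /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server sandbox.hortonworks.com:2181
  ls /hbase-unsecure         # /hbase-unsecure/master should appear once a master becomes active

Given the master log above, the missing znodes look like a symptom of the master dying during initialization (the KMS ConnectException) rather than a ZooKeeper problem.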