Member since: 10-28-2016
Posts: 392
Kudos Received: 7
Solutions: 20

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2321 | 03-12-2018 02:28 AM |
| | 3610 | 12-18-2017 11:41 PM |
| | 2558 | 07-17-2017 07:01 PM |
| | 1758 | 07-13-2017 07:20 PM |
| | 5280 | 07-12-2017 08:31 PM |
01-27-2017
01:07 AM
@mqureshi -
I had to do a `su hbase` and then launch the HBase shell; otherwise the user root was trying to access the HBase table. I'm able to create & access the table now. Thanks.
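For anyone hitting the same thing, the fix boils down to running the HBase shell as the hbase service user rather than root. A minimal sketch of the session (the table name 'emp' is the one from this thread; treat the exact commands as illustrative for an unsecured sandbox):

```shell
# Switch from root to the hbase OS user before launching the shell,
# so table operations are performed as 'hbase', not 'root'.
su - hbase
hbase shell
# hbase(main):001:0> create 'emp', 'cf'
# hbase(main):002:0> scan 'emp'
```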
01-27-2017
12:52 AM
@mqureshi - Appreciate your help in this. I finally figured out the cause of the issue. I was trying to use Ranger to restrict access to tables created in the encryption zone, and in the process had removed the user 'hbase''s access to the encrypted HDFS location and to the key. This was causing the issue in starting up HBase. I've added back the permission to the HDFS location & the key, and am able to start up HBase and create & access tables.

However, one more issue - how do I restrict access to the created table (using Ranger)? Here is what I did:
1) Removed global access to HBase tables
2) Gave access to the created table - 'emp'
However, now I'm not able to see the table created. Any ideas on how to achieve this?
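A sketch of the permission fix described above, assuming the zone layout from this thread (the exact paths, owner/group, and key name on a real cluster may differ; the Ranger KMS step is done in the Ranger UI or REST API rather than on the command line):

```shell
# 1) Restore the hbase user's access to the encrypted HBase root in HDFS.
sudo -u hdfs hdfs dfs -chown -R hbase:hdfs /encrypt_hbase2/hbase

# 2) In Ranger KMS, re-add a policy granting the 'hbase' user the
#    DECRYPT_EEK (and GET_METADATA) operations on the zone's key,
#    so HBase can decrypt data encryption keys at startup.
```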
01-26-2017
10:48 PM
@mqureshi - yes, I did. In fact, I just did a redo:
1) rmr /hbase-unsecure
2) restarted ZooKeeper
3) restarted the HBase master

If you look at the highlighted portion of the HBase master log, it seems to be looking for /hbase-unsecure/rs/sandbox.hortonworks.com,16000,1485470635621 ("already deleted, retry=false"), which seems to not be getting created?

ZooKeeper client ->
[zk: sandbox.hortonworks.com:2181(CONNECTED) 3] ls /hbase-unsecure
[recovering-regions, splitWAL, rs, backup-masters, region-in-transition, draining, table, table-lock]
[zk: sandbox.hortonworks.com:2181(CONNECTED) 4] ls /hbase-unsecure/rs
[]

HBase master logs ->
-----------------------------------------------
[root@sandbox ~]# tail -f /var/log/hbase/hbase-hbase-master-sandbox.hortonworks.com.log
2017-01-26 22:44:01,359 INFO [master/sandbox.hortonworks.com/10.0.2.15:16000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2017-01-26 22:44:01,359 INFO [master/sandbox.hortonworks.com/10.0.2.15:16000] zookeeper.ZooKeeper: Session: 0x159dcf27e800001 closed
2017-01-26 22:44:01,370 INFO [master/sandbox.hortonworks.com/10.0.2.15:16000] regionserver.HRegionServer: stopping server sandbox.hortonworks.com,16000,1485470635621; all regions closed.
2017-01-26 22:44:01,371 INFO [master/sandbox.hortonworks.com/10.0.2.15:16000] hbase.ChoreService: Chore service for: sandbox.hortonworks.com,16000,1485470635621 had [] on shutdown
2017-01-26 22:44:01,379 INFO [master/sandbox.hortonworks.com/10.0.2.15:16000] ipc.RpcServer: Stopping server on 16000
2017-01-26 22:44:01,382 INFO [master/sandbox.hortonworks.com/10.0.2.15:16000] zookeeper.RecoverableZooKeeper: Node /hbase-unsecure/rs/sandbox.hortonworks.com,16000,1485470635621 already deleted, retry=false
2017-01-26 22:44:01,384 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2017-01-26 22:44:01,385 INFO [master/sandbox.hortonworks.com/10.0.2.15:16000] zookeeper.ZooKeeper: Session: 0x159dcf27e800000 closed
2017-01-26 22:44:01,385 INFO [master/sandbox.hortonworks.com/10.0.2.15:16000] regionserver.HRegionServer: stopping server sandbox.hortonworks.com,16000,1485470635621; zookeeper connection closed.
2017-01-26 22:44:01,385 INFO [master/sandbox.hortonworks.com/10.0.2.15:16000] regionserver.HRegionServer: master/sandbox.hortonworks.com/10.0.2.15:16000 exiting
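The redo steps above can be sketched as one command sequence (hostnames and paths are the sandbox ones from this thread; verify the zkCli path against your HDP version, and use Ambari or your init scripts for the restarts):

```shell
# Clear the stale HBase znode, then restart ZooKeeper and the HBase master.
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server sandbox.hortonworks.com:2181 <<'EOF'
rmr /hbase-unsecure
quit
EOF
# Restart ZooKeeper and the HBase master (e.g. via Ambari), then watch the log:
tail -f /var/log/hbase/hbase-hbase-master-sandbox.hortonworks.com.log
```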
01-26-2017
10:25 PM
@mqureshi - I created a new encryption zone, /encrypt_hbase2/hbase, to re-test it. The current hbase-site.xml also points to the new encryption zone -> /encrypt_hbase2/hbase. Sorry, forgot to mention that earlier.
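For reference, setting up an encryption zone like the one above typically looks like this (the key name 'hbasekey' is hypothetical, not from this thread; the zone path is the one quoted):

```shell
# Create a key in KMS, make the zone directory, and declare it an
# encryption zone; then create the HBase root inside it.
hadoop key create hbasekey
sudo -u hdfs hdfs dfs -mkdir -p /encrypt_hbase2
sudo -u hdfs hdfs crypto -createZone -keyName hbasekey -path /encrypt_hbase2
sudo -u hdfs hdfs dfs -mkdir /encrypt_hbase2/hbase
```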
01-26-2017
09:24 PM
@mqureshi - attaching the ZooKeeper log & the HBase master log:
zookeeper-zookeeper-server-sandboxhortonworkscomou.txt
hbase-masterlog.txt

Also, there is no data as yet in the sandbox (in the new encrypted location), and I have cleaned up and restarted ZooKeeper multiple times. Pls note - if I revert back to the original unencrypted location & restart HBase, it starts working fine.

This is what I see on running zookeeper-client ->
Command -> /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server sandbox.hortonworks.com:2181
[zk: sandbox.hortonworks.com:2181(CONNECTED) 4] ls /
[clusterstate.json, consumers, hiveserver2, storm, rmstore, controller_epoch, configs, isr_change_notification, admin, zookeeper, aliases.json, config, hbase-unsecure, registry, templeton-hadoop, live_nodes, overseer, overseer_elect, collections, brokers]
[zk: 127.0.0.1:2181(CONNECTED) 16] ls /hbase-unsecure
[recovering-regions, splitWAL, rs, backup-masters, region-in-transition, draining, table, table-lock]

So on starting the HBase master, it seems some of the znodes under /hbase-unsecure are not getting created, including /hbase-unsecure/master. What needs to be done for this?
01-26-2017
07:13 AM
hbase-masterlog.txt
@mqureshi - it seems the HBase master is not starting now; attached is the log file. Any ideas? Pls note - this cluster is not kerberized.
----------------------------------------
2017-01-26 07:06:47,104 INFO [master/sandbox.hortonworks.com/10.0.2.15:16000] client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
2017-01-26 07:06:47,455 FATAL [sandbox:16000.activeMasterManager] master.HMaster: Failed to become active master
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:998)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:934)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:852)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:190)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128)
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:483)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1395)
at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1465)
at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:305)
at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:299)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:299)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767)
at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:571)
at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:656)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:455)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:146)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:126)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:667)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:191)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1783)
at java.lang.Thread.run(Thread.java:745)
2017-01-26 07:06:47,467 FATAL [sandbox:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:998)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:934)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:852)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:190)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128)
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:483)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1395
01-26-2017
06:37 AM
@stevel, @Pierre Villard - agreed. I'll be using Kerberos as a first step, but still wanted to confirm whether it is mandatory for HDFS encryption at rest.
01-25-2017
11:05 PM
hive.exec.stagingdir was already set to /encrypt/hive/tmp/,
and scratchdir to a location in the encryption zone -> /encrypt/hive/tmp, with permission 777. There was an additional variable that had to be changed:
hive.metastore.warehouse.dir - I changed this from the existing value (/apps/hive/warehouse) to a location in the encrypted zone -> /encrypt/hive, and this problem is fixed.
----------------------------------------------------------------------------------------------------------
INFO
: Moving data to: /encrypt/hive/testtable2 from
hdfs://sandbox.hortonworks.com:8020/encrypt/hive/.hive-staging_hive_2017-01-25_22-54-41_396_5265658181234256688-1/-ext-10001
INFO : Table default.testtable2 stats: [numFiles=1, numRows=5, totalSize=211, rawDataSize=206]
No rows affected (47.001 seconds)
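The configuration change described above corresponds to these hive-site.xml properties (values as quoted in the post; confirm the paths against your own encryption-zone layout before applying):

```xml
<!-- hive-site.xml: keep warehouse and staging dirs inside the encryption zone,
     so Hive's final move is not a cross-zone rename. -->
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/encrypt/hive</value>   <!-- was /apps/hive/warehouse -->
</property>
<property>
  <name>hive.exec.stagingdir</name>
  <value>/encrypt/hive/tmp/</value>
</property>
```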
01-25-2017
11:03 PM
2 Kudos
@Mahesh M. Pillai - hive.exec.stagingdir was already set to /encrypt/hive/tmp/. There was an additional variable that had to be changed: hive.metastore.warehouse.dir - I changed this from the existing value (/apps/hive/warehouse) to a location in the encrypted zone -> /encrypt/hive, and this problem is fixed.
----------------------------------
INFO : Moving data to: /encrypt/hive/testtable2 from hdfs://sandbox.hortonworks.com:8020/encrypt/hive/.hive-staging_hive_2017-01-25_22-54-41_396_5265658181234256688-1/-ext-10001
INFO : Table default.testtable2 stats: [numFiles=1, numRows=5, totalSize=211, rawDataSize=206]
No rows affected (47.001 seconds)
01-25-2017
09:48 PM
@mqureshi - thanks. The znode corresponding to HBase is /hbase-unsecure, and I logged on & cleared /hbase-unsecure in ZooKeeper. Then I restarted HBase; however, the region server is shutting down. Attaching the log file & hbase-site.xml:
hbase-regionserver-logs.txt
hbase-sitexml.txt
Error snippet shown below, any ideas on this?
------------------------------
2017-01-25 21:22:57,337 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2017-01-25 21:22:57,417 WARN [RS_OPEN_META-sandbox:16020-0] ipc.Client: interrupted waiting to send rpc request to server
java.lang.InterruptedException
at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:400)
at java.util.concurrent.FutureTask.get(FutureTask.java:187)
at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1057)
at org.apache.hadoop.ipc.Client.call(Client.java:1400)
at org.apache.hadoop.ipc.Client.call(Client.java:1358)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1315)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1311)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
at org.apache.hadoop.hbase.wal.WALSplitter.writeRegionSequenceIdFile(WALSplitter.java:716)
at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:860)
at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:794)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6328)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6289)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6260)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6216)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6167)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2017-01-25 21:22:57,418 ERROR [RS_OPEN_META-sandbox:16020-0] handler.OpenRegionHandler: Failed open of region=hbase:meta,,1.1588230740, starting to roll back the global memstore size.
java.io.IOException: java.lang.InterruptedException
at org.apache.hadoop.ipc.Client.call(Client.java:1406)
at org.apache.hadoop.ipc.Client.call(Client.java:1358)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufR