Member since: 07-17-2019
Posts: 738
Kudos Received: 433
Solutions: 111

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3479 | 08-06-2019 07:09 PM |
| | 3681 | 07-19-2019 01:57 PM |
| | 5208 | 02-25-2019 04:47 PM |
| | 4675 | 10-11-2018 02:47 PM |
| | 1772 | 09-26-2018 02:49 PM |
03-16-2017 05:43 PM
In the Ambari Hosts view, click on a host, and then click the "Add" button near the top of the page. The resulting list should contain "Phoenix Query Server", which you can click to install it on the given host.
03-14-2017 11:10 PM
This isn't answering your question, but this is generally a bad idea. It is not a scalable way to retrieve data in HBase, as it is an exhaustive search of your entire HBase table (a full table scan). It should only be used for an infrequently-executed MapReduce job (and even then, with caution, as it is an expensive operation and can affect other latencies). But, if you know what you're doing, see this chapter in the HBase book: https://hbase.apache.org/book.html#cf.keep.deleted

- Make sure you create your table to retain more than one version of a cell.
- Use the RAW => true option on `scan` to see the timestamp field (both steps are shown in the sketch after this list).
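A minimal HBase shell sketch of both steps; the table name 't1' and column family 'f1' are hypothetical, not from the original question:

```
# Keep up to 5 versions per cell and retain deleted cells (assumed settings)
create 't1', {NAME => 'f1', VERSIONS => 5, KEEP_DELETED_CELLS => true}

# A raw scan returns every cell version, including delete markers, with timestamps
scan 't1', {RAW => true, VERSIONS => 5}
```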
03-14-2017 09:38 PM
At the risk of repeating myself, you should use Apache Phoenix. It gives you the perks of HBase as a backing store with a SQL front-end to use as your entry point.
03-13-2017 04:23 PM
2 Kudos
"Where can I add this prefix in my row key? Is it coded in some file, or is it implemented behind the scenes in SQL?"

You add it to the code which is ingesting data into HBase. That is the simplest way to implement this logic.

"If I have a prefix in my row key and I want to select a specific row key in HBase, I won't know what prefix that row key has. Is this a limitation, or am I seeing things wrong?"

Yes, this is a limitation of your "formula". For salting/hashing, you want a stable hash such that, given the actual data in the row key, you can compute what the salt/hash is. I'd recommend that you use Apache Phoenix if you want to implement salting. It provides this as a feature and would likely save you a bit of time/effort (see the sketch below).
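A sketch of Phoenix's built-in salting; the table and column names are hypothetical, not from the original question:

```
-- Phoenix pre-splits the table into 4 buckets and transparently adds and
-- strips the leading salt byte on every read and write
CREATE TABLE sensor_data (
    sensor_id  VARCHAR NOT NULL,
    event_time DATE NOT NULL,
    reading    DOUBLE
    CONSTRAINT pk PRIMARY KEY (sensor_id, event_time)
) SALT_BUCKETS = 4;

-- Point lookups still work: the client recomputes the salt from the row key
SELECT reading FROM sensor_data
WHERE sensor_id = 'sensor-42' AND event_time = CURRENT_DATE();
```

Because the salt is a stable hash of the row key, the client can always recompute which bucket a given key lives in.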
03-12-2017 12:35 AM
If the above worked, it's not a port-forwarding issue.
03-11-2017 10:22 PM
Did you verify that node-A can connect to the ZooKeeper instance on node-B?

$ echo ruok | nc node-B 2181

You should see the response "imok". The error appears to imply that a network connection cannot be opened to ZooKeeper, which suggests a firewall or port-forwarding issue.
03-10-2017 11:26 PM
A common error to see in initial installations is the following, from the Accumulo TabletServer logs:

Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /apps/accumulo/data/wal/myhost.mydomain.com+9997/1ff916a2-13d0-4bb7-aa38-c44b69831519 could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1649)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3198)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3122)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:843)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
This exception will be printed repeatedly in the TabletServer logs, as Accumulo has no option but to keep trying to create its write-ahead log file. Indirectly, the exception tells us several things about the current state:

- There are three Datanodes.
- None of the Datanodes were excluded -- all three of them should have been able to accept the write.
- None of the Datanodes successfully accepted the write.

The most common cause of this issue is that each Datanode has very little disk space available. When Accumulo creates its write-ahead log files, it sets a large HDFS block size (by default, 1GB). If a Datanode does not have enough free space to store 1GB of data, the allocation fails. When all of the Datanodes are in this situation, you see the above error. The solution is to provide more storage for the Datanodes. Commonly, the underlying problem is that HDFS is not configured to use the correct data directories, or some hard drives were not mounted at the data directories (and thus the Datanodes are using the root volume). Some checks are sketched below.
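A few checks along these lines; the config path and mount point are assumptions typical of an HDP-style layout, not taken from the original post:

```
# How much capacity does each Datanode actually report to the NameNode?
$ hdfs dfsadmin -report

# Are the Datanode data directories pointing at the intended locations?
$ grep -A1 'dfs.datanode.data.dir' /etc/hadoop/conf/hdfs-site.xml

# Is a real volume mounted at those directories, or is this the root volume?
$ df -h /hadoop/hdfs/data
```

If more storage genuinely cannot be added, lowering Accumulo's tserver.wal.blocksize property is another option, at the cost of more frequent write-ahead log rollovers.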
03-09-2017 03:26 PM
Please ignore this exception. It should not be printed in the logs at the INFO level for you to see. These tokens expire and are renewed by design; this should be transparent to you. The bug is simply that this exception is printed at all.
03-08-2017 05:46 PM
1 Kudo
ZooKeeper is telling you what went wrong:

An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. This may be caused by Java's being unable to resolve the Zookeeper Quorum Member's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Zookeeper Client will go to AUTH_FAILED state.

SASL authentication (with Kerberos) failed, which caused the ZK client to fall back to an un-authenticated state. When the HBase client then tried to read the ACL'ed znodes in ZK, it failed because you were not authenticated. The error implies that the code was able to find your Kerberos ticket; however, one or more of the ZooKeeper servers you specified were not found when the client looked them up in the KDC to perform Kerberos authentication. Make sure that you are specifying the correct, fully-qualified domain name for each ZooKeeper server. This must exactly match the "instance" component of the Kerberos principal that your ZooKeeper servers are using (e.g. "host.domain.com" in the principal "zookeeper/host.domain.com@DOMAIN.COM"). A couple of quick checks are sketched below.
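Two quick checks, assuming an MIT Kerberos KDC; the quorum hostname and admin principal below are hypothetical:

```
# Does the hostname you pass to the client resolve consistently?
$ nslookup zk1.domain.com

# Does the KDC know a zookeeper service principal for that exact hostname?
$ kadmin -p admin/admin -q "listprincs zookeeper/*"
```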