Member since: 07-17-2019
Posts: 738
Kudos Received: 433
Solutions: 111
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3474 | 08-06-2019 07:09 PM |
| | 3673 | 07-19-2019 01:57 PM |
| | 5205 | 02-25-2019 04:47 PM |
| | 4668 | 10-11-2018 02:47 PM |
| | 1771 | 09-26-2018 02:49 PM |
01-23-2017
09:02 PM
1 Kudo
Sometimes, in the face of a broken configuration or failed setup, it is easier to completely re-initialize an Accumulo installation than to try to repair it. These steps differ from what you might find elsewhere in the Apache ecosystem because they reflect how Apache Ambari installs and configures Accumulo.

Warning: the following steps irreversibly remove all Accumulo-related data. This includes table data, namespaces, table configuration, and Accumulo users. Do not perform these steps unless you are positive that you do not want to preserve any of that information.

First, Accumulo must be stopped. This can be done via Ambari and verified using tools like `ps` on the nodes.

Second, the Accumulo HDFS directory should be removed. This can be done by the HDFS superuser ("hdfs" by default) or the Accumulo user ("accumulo" by default).

```shell
sudo -u hdfs hdfs dfs -rm -R /apps/accumulo/data
```

Next, Accumulo needs to be re-initialized using the command-line tools. This command must be executed from a node in your cluster where an Accumulo service is currently installed (the Accumulo client is not sufficient).

```shell
sudo -u accumulo ACCUMULO_CONF_DIR=/etc/accumulo/conf/server accumulo init --instance-name hdp-accumulo-instance --clear-instance-name
```

This command requires two pieces of information. The first is provided as an argument to the command: the Accumulo instance name. By default, Ambari uses the name "hdp-accumulo-instance"; however, users may have provided their own value. Because this name is how clients find and connect to Accumulo, it is important to use the correct one. The second piece of information is the Accumulo root user's password, which you will be prompted for after running the command. This is only relevant when Kerberos authentication is not configured; when Kerberos is enabled, this command will instead prompt you for the full Kerberos principal of the user to grant Accumulo administrative (SYSTEM) permissions to.

If this command returns successfully, you can restart Accumulo via Ambari. Visit the Accumulo Monitor page to verify that the system is online, and/or use the proper Accumulo credentials to access the system via the Accumulo shell.
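One way to verify the "Accumulo is stopped" precondition before deleting anything — a minimal sketch; the process-name patterns below are assumptions based on how the Accumulo daemons typically appear in `ps` output, so adjust for your install:

```shell
# Check that no Accumulo daemons remain on this node before wiping data.
# (Patterns are assumptions; verify against your own `ps` output.)
if ps -ef | grep -E 'accumulo\.start|org\.apache\.accumulo' | grep -v grep; then
  echo "Accumulo processes still running -- stop them via Ambari first." >&2
  exit 1
fi
echo "No Accumulo processes found; safe to proceed."
```

Run this on each node hosting an Accumulo service, not just one.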
... View more
01-23-2017
04:32 PM
"I had added that flag already and that only showed me that I have a valid service ticket (as mentioned above)." I would suggest sharing that information anyway. Sometimes the details printed by that option are subtle yet telling. You can also try setting the log4j level to DEBUG or TRACE for org.apache.hadoop.hbase.ipc to see if there is more context there.
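For reference, that would be a one-line addition to the client's log4j.properties (file location varies by install; DEBUG vs. TRACE is your choice):

```properties
# Raise verbosity for the HBase RPC layer to surface connection/auth details
log4j.logger.org.apache.hadoop.hbase.ipc=DEBUG
```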
... View more
01-19-2017
04:58 PM
ZooKeeper session expiration happens when the client (the HBase RegionServer) fails to successfully contact the ZooKeeper server. This can happen for a variety of reasons:

- JVM GC pauses in the RegionServer
- Swapping of memory pages to disk (instead of remaining in memory)
- Network partitions -- the RegionServer host is physically unable to send traffic to the ZooKeeper host
- ZooKeeper connection rate-limiting: https://community.hortonworks.com/articles/51191/understanding-apache-zookeeper-connection-rate-lim.html

Typically, JVM GC pauses and swapping are the most common causes. Make sure that your system has adequate memory and that enough of it is allocated to the RegionServer. The article linked above on ZooKeeper connection rate-limiting has instructions to check whether that is happening on your system. If you are a Hortonworks customer, please consider using SmartSense to help automatically diagnose some of these issues.
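Two quick checks for the most common causes above — a sketch, assuming a typical RegionServer log location (the path is an assumption; adjust for your install) and the pause message that HBase's JvmPauseMonitor writes to the log:

```shell
# 1) Look for long JVM/host pauses reported by the JvmPauseMonitor.
#    (Log path is an assumption; adjust for your environment.)
grep -i "Detected pause in JVM or host machine" \
  /var/log/hbase/hbase-hbase-regionserver-*.log | tail -n 5

# 2) Check whether the host is low on memory or actively swapping.
free -m
```

If the grep turns up pauses of multiple seconds, or `free` shows heavy swap usage, you have likely found your culprit.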
... View more
01-19-2017
04:55 PM
Might want to double-check your link 🙂
... View more
01-19-2017
12:59 AM
Sorry for the tangent, Qi. Glad Sergey was here to steer you in a better direction 🙂
... View more
01-18-2017
08:55 PM
Can you please try to create a table? This specifically exercises the Master, whereas scanning a table can be done by talking only to a RegionServer.
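A quick way to do that from the command line (the table and column family names here are arbitrary):

```shell
# DDL operations such as `create` go through the HBase Master;
# if the Master is unreachable, this will fail where a scan might not.
echo "create 'probe_table', 'cf'" | hbase shell
```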
... View more
01-18-2017
05:13 PM
`org.apache.hadoop.hbase.MasterNotRunningException`

It would appear that HBase is not actually running. Can you verify that HBase is healthy? Can you interact with HBase via `hbase shell` (e.g. create a table, add data to it, delete the table)?
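A minimal smoke test along those lines (the table and column family names are arbitrary):

```shell
# Create a table, write and read a cell, then clean up.
# Any step failing points at the unhealthy component.
hbase shell <<'EOF'
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:q', 'value'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
EOF
```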
... View more
01-16-2017
04:12 PM
Look at the logs from the YARN container which corresponds to that Hive vertex. You can find these logs via the YARN ResourceManager web UI.
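You can also pull those logs from the command line; the application ID below is a placeholder — take the real one from the ResourceManager UI or the `-list` output:

```shell
# Find the application backing the Hive query
yarn application -list -appStates FINISHED,FAILED,KILLED

# Fetch the aggregated container logs for it (placeholder ID)
yarn logs -applicationId application_1484000000000_0001
```

Note that `yarn logs` requires log aggregation to be enabled; otherwise, read the container logs on the NodeManager hosts directly.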
... View more
01-11-2017
04:40 PM
@Christopher Bridge There was a recent issue in https://issues.apache.org/jira/browse/PHOENIX-3126, but this was not included in HDP-2.5.0.0. Connections by PQS to HBase are always made with the PQS principal+keytab -- the end user is always "proxied" on top of the PQS credentials. If you have example code which shows something happening and can describe why you think this is wrong, I'll try to take a look at it. If you are a Hortonworks customer, you can/should also reach out through support channels.
... View more
01-10-2017
08:45 PM
Just ran into this one myself. Looks like Ambari parses the output of hdp-select to determine the full package name to install:

```shell
rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g'
```
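To illustrate what that sed expression does, here is the transformation on its own (the version string below is made up):

```shell
# The sed strips the ".elN" distro suffix from the rpm release field
echo '2.5.3.0-37.el6' | sed -e 's/\.el[0-9]//g'
# → 2.5.3.0-37
```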
... View more