Member since: 07-04-2016
Posts: 40
Kudos Received: 0
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5238 | 09-05-2016 12:05 PM |
| | 1977 | 09-05-2016 12:03 PM |
08-31-2016
12:09 PM
It seems that there is only one directory in each, they are not "unwelcome", and they are owned by hdfs. But thank you for the idea! [EDIT]: hdfs dfs -ls / did show me 8 items with no errors.
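For reference, a minimal sketch of how that kind of check can be run as the HDFS superuser; the hdfs user name and the /tmp example path are just the stock defaults and may differ per cluster:

# List the HDFS root and show owner/group/permissions for each entry
sudo -u hdfs hdfs dfs -ls /
# Spot-check one entry's owner and group explicitly (example path only)
sudo -u hdfs hdfs dfs -stat "%u:%g %n" /tmp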
08-31-2016
11:03 AM
@Sagar Shimpi I looked at the options for configuring existing views, but I don't think this will fix the issue, as I think it is an HDFS connection/installation problem and not just a UI issue, given the errors. Unless you have an idea for a configuration change that could fix the cluster connection and HDFS recognition.
08-31-2016
10:46 AM
So I just very recently installed HDP 2.3.4.0 using the Ambari Install wizard and a local repository. I installed on 5 nodes according to the default/suggested options in the wizard. This is my first experience with the Ambari and HDP environment, so I am a bit lost. After a bit of bug fixing, all of the services are running (dashboard-aug31.png) on all 5 nodes, with no alerts. Yet the widgets all say n/a or load indefinitely, and all of the views are empty with messages like "cluster not connected", "NullPointerException", etc. Obviously there is a large flaw in my setup, and I don't know how to figure out what it is or how to fix it. I can't find anyone else who has posted about the same thing happening to them. Does anyone have any ideas? I haven't started actually using it yet, so there is no data anywhere. Here are screenshots of the views: yarn-queue-manager.png smartsense-view.png hive-view.png tez-view.png
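For anyone debugging the same symptoms, a minimal sketch of checking whether the Ambari server can see its own cluster through the REST API; the host name, the admin:admin credentials, and the cluster name "mycluster" are placeholders:

# Ask the Ambari REST API which clusters it knows about
curl -s -u admin:admin http://ambari-host.example.com:8080/api/v1/clusters
# Check the reported state of HDFS for the placeholder cluster name
curl -s -u admin:admin \
  "http://ambari-host.example.com:8080/api/v1/clusters/mycluster/services/HDFS?fields=ServiceInfo/state"

If these calls come back with the service STARTED but the views still say the cluster is not connected, the problem is more likely in the view instance or metrics configuration than in HDFS itself.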
08-31-2016
08:51 AM
Thank you for both responses @Josh Elser; at least I know it's not some obvious mistake I made. I will re-install the service and see what happens.
08-30-2016
07:16 AM
@Josh Elser Your suspicion is right: /usr/hdp/2.3.4.0-3485/etc/default/accumulo exists on the nodes that are correctly running the TServer, and doesn't exist on the ones that aren't. EDIT: I tried just adding the file to one of the incorrectly-running nodes, and the error changed to "Error: Could not find or load main class org.apache.accumulo.start.Main" ... so @Jonathan Hurley is most likely right that there is an issue with the configuration files, I think? Is there a place either of you recommends I look for this difference in configuration between this node and the others? Thanks for your help.
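A minimal sketch of one way to diff the relevant files between a working node and a failing one; the host names are placeholders and the paths are the stock HDP 2.3.4.0-3485 locations discussed above:

# Compare the sourced env file and the Accumulo client config between a node
# where the TServer runs and one where it doesn't
GOOD=good-node.example.com
BAD=bad-node.example.com
for f in /usr/hdp/2.3.4.0-3485/etc/default/accumulo \
         /etc/accumulo/conf/accumulo-env.sh \
         /etc/accumulo/conf/accumulo-site.xml; do
  echo "== $f =="
  diff <(ssh "$GOOD" cat "$f") <(ssh "$BAD" cat "$f")
done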
08-30-2016
07:11 AM
This is the /usr/hdp/current/accumulo-client/bin/accumulo file that gives the error; it is identical on both nodes, the one whose TServer crashes and the one whose TServer seems to work fine.
#!/bin/sh
. /usr/hdp/2.3.4.0-3485/etc/default/hadoop
. /usr/hdp/2.3.4.0-3485/etc/default/accumulo
# Autodetect JAVA_HOME if not defined
if [ -e /usr/libexec/bigtop-detect-javahome ]; then
  . /usr/libexec/bigtop-detect-javahome
elif [ -e /usr/lib/bigtop-utils/bigtop-detect-javahome ]; then
  . /usr/lib/bigtop-utils/bigtop-detect-javahome
fi
export HDP_VERSION=${HDP_VERSION:-2.3.4.0-3485}
export ACCUMULO_OTHER_OPTS="-Dhdp.version=${HDP_VERSION} ${ACCUMULO_OTHER_OPTS}"
exec /usr/hdp/2.3.4.0-3485//accumulo/bin/accumulo.distro "$@"
So I could change the line ". /usr/hdp/2.3.4.0-3485/etc/default/accumulo", but if that's what's causing my problems then all of the nodes running it would/should have the same problem. I will compare their configs a little more to see if there's a difference somewhere.
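For reference, a minimal sketch of how that line could be guarded so the wrapper doesn't fail hard when the file is missing; this is a local workaround sketch, not the stock HDP script, and the real fix is still to restore the missing file:

# Only source the env file if it actually exists
if [ -e /usr/hdp/2.3.4.0-3485/etc/default/accumulo ]; then
  . /usr/hdp/2.3.4.0-3485/etc/default/accumulo
fi

Skipping the file would likely leave whatever variables it normally sets undefined, so this only hides the symptom rather than fixing whatever removed the file.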
08-29-2016
08:19 AM
Hi @Josh Elser, thanks for the response! I can compare their log files, but otherwise I'm not sure how exactly to cross-reference them, as there is little to no logging of where the problem is happening. What should I look at for cross-referencing other than the logs? I copied over the log files (from my working nodes) and made sure the appropriate names were used, but the behavior was the same. EDIT: I feel like I should add that I didn't copy over the "err" files from the working nodes, as there are no errors there, so this file exists but is empty. I did remove the err line from both instances; this didn't change the behavior, which I didn't think it would.
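A minimal sketch of other things worth cross-referencing between a working node and the failing one besides the logs; the host names are placeholders:

# Compare which HDP version and which Accumulo packages each node has
for h in good-node.example.com bad-node.example.com; do
  echo "== $h =="
  ssh "$h" 'hdp-select status | grep -i accumulo'
  ssh "$h" 'rpm -qa | grep -i accumulo | sort'
done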
08-26-2016
08:47 AM
Hi @Jonathan Hurley, thanks for the response. So it's a problem with the way Ambari tests whether or not the Accumulo TServer is started? That thread indicates a problem with "Ambari" in general, but all of my other Ambari services are running. It is only the TServer that starts, stops immediately, and is reported as not running. If it were running, would there not be logs in those folders? Do you have any suggestions as to what I can do to confirm that this is the problem? As mentioned, I am brand new to Ambari and HDP in general.
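A minimal sketch of how to check, on an affected node, whether a TServer process really exists and whether anything is listening on the port from the alert (9997 is the default Accumulo TServer client port):

# Is a TServer JVM actually running on this node?
ps -ef | grep -i '[t]server'
# Is anything listening on the TServer client port mentioned in the alert?
ss -tlnp | grep 9997 || netstat -tlnp | grep 9997

If neither command shows anything, the alert is accurate and the process is dying at startup, so the useful output is whatever it writes just before exiting.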
08-25-2016
09:51 AM
So I've just completed the Ambari install wizard (Ambari 2.2, HDP 2.3.4) and started the services. Everything is running and working now (not at first), except that 3 of the 5 hosts have the same issue with the Accumulo TServer. It starts up with the solid green check (photo attached) but stops after just a few seconds with the alert icon. The only information I found about the error is "Connection failed: [Errno 111] Connection refused to mo-31aeb5591.mo.sap.corp:9997", which I showed in the t-server-process attachment. I checked my ssh connection and it's fine, and all of the other services installed fine, so I'm not sure what exactly that means. I posted the logs below; the .err file just said no such directory, and the .out file is empty. Are there other locations with more verbose error logs about this? As said, I am new to the environment. Any general troubleshooting advice for initial issues after installation, or links to guides that may help, would also be very appreciated.
[root@xxxxxxxxxxxx ~]# cd /var/log/accumulo/
[root@xxxxxxxxxxxx accumulo]# ls
accumulo-tserver.err accumulo-tserver.out
[root@xxxxxxxxxxxx accumulo]# cat accumulo-tserver.err
/usr/hdp/current/accumulo-client/bin/accumulo: line 4: /usr/hdp/2.3.4.0-3485/etc/default/accumulo: No such file or directory
Labels:
- Apache Accumulo
- Apache Ambari
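For reference, a minimal sketch of where more verbose logs might live, assuming the stock HDP layout; the log directory is whatever ACCUMULO_LOG_DIR points to in accumulo-env.sh, and the file names below are only the usual Accumulo patterns:

# Find the configured log directory (stock HDP config path assumed)
grep ACCUMULO_LOG_DIR /etc/accumulo/conf/accumulo-env.sh
# Accumulo typically writes tserver_<host>.log and tserver_<host>.debug.log there
ls -l /var/log/accumulo/
tail -n 50 /var/log/accumulo/tserver_*.log 2>/dev/null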
08-05-2016
11:54 AM
Hi @sbhat, thank you for your response. I tried following that guide, but now my epel.repo is broken because it is provided through my network (I think that makes sense). This is the result. I guess I could get this repository elsewhere, but I have a feeling this will cause other large-scale issues, as I am using a product for my VMs that is provided only through my company's network. Is the only option to bypass the proxy completely? I am hoping there is something else I can do.
[root@mo-1184a7ee4 ~]# yum update
Loaded plugins: product-id, subscription-manager
Setting up Update Process
http://my-proxy-addr:8080/mrepo/redhat/6/rhel6epel-x86_64/RPMS.all/repodata/repomd.xml: [Errno 12] Timeout on http://my-proxy-addr:8080/mrepo/redhat/6/rhel6epel-x86_64/RPMS.all/repodata/repomd.xml: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: epel.repo. Please verify its path and try again
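For reference, a minimal sketch of one way to keep the global proxy but exempt a single repository; the repo id and the grep pattern below are placeholders taken from the error above:

# Find which repo file defines the failing repository
grep -l 'rhel6epel' /etc/yum.repos.d/*.repo
# Sketch of the idea: inside that repo's section, the per-repo override
#   proxy=_none_
# tells yum to bypass the global proxy for just that repository.
# After editing, refresh metadata for only that repo (placeholder repo id):
yum --disablerepo='*' --enablerepo='epel' clean metadata
yum --disablerepo='*' --enablerepo='epel' makecache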