Member since: 09-26-2016
Posts: 33
Kudos Received: 4
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
| | 1844 | 08-28-2018 07:37 PM |
| | 17158 | 12-27-2017 02:48 PM |
| | 3737 | 11-08-2016 06:45 PM |
10-08-2018 03:23 PM
So am I. I did a full Ambari 2.7 and HDP 3.0.1 upgrade, only to find that the Lucidworks HDP Solr mpacks and such don't work, and I no longer see Solr as a service I can add (even though I tried to reinstall the mpack, etc.).
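For what it's worth, this is roughly what the reinstall attempt looked like; the mpack name and tarball path below are placeholders, not the real Lucidworks values, so check their docs for your version:

# remove the old management pack, install the new one, restart Ambari
# (mpack name and tarball path are placeholders)
ambari-server uninstall-mpack --mpack-name=<solr-mpack-name>
ambari-server install-mpack --mpack=/path/to/solr-service-mpack.tar.gz
ambari-server restart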
08-28-2018 07:37 PM
1 Kudo
After several hours of searching, I have come to the conclusion that there is no easy fix for this, specifically through Ambari API calls, which is a shame. The API calls for services are limited to stopping, starting, adding, and deleting; there's no way to "fix" a broken state like the one above. Apparently the only way to solve this was to hack the Ambari database and change the component state from "STOPPING" to "INSTALLED", and then I was fine. There really ought to be a better way...
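For the record, here's a minimal sketch of the database fix, assuming a Postgres-backed Ambari; hostcomponentstate is the table my install used, but names may differ by version, so stop ambari-server and back up the database first:

# stop Ambari and snapshot the DB before editing state by hand
ambari-server stop
pg_dump -U ambari ambari > ambari-backup.sql
# flip the stuck component out of STOPPING
psql -U ambari -d ambari -c "UPDATE hostcomponentstate SET current_state='INSTALLED' WHERE component_name='SECONDARY_NAMENODE' AND current_state='STOPPING';"
ambari-server start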
08-28-2018 02:23 PM
1 Kudo
Good morning! So, I just upgraded a small cluster to version 3.0.0. The upgrade seemed to go well, but after a reboot I am stuck: Ambari still thinks two services are "stopping..." (secondary namenode and also zookeeper), so I can't get the services running. The error I get is:

Error message: java.lang.IllegalArgumentException: Invalid transition for servicecomponenthost, clusterName=ontomatedev, clusterId=2, serviceName=HDFS, componentName=SECONDARY_NAMENODE, hostname=myhost.domain.com, currentState=STOPPING, newDesiredState=STARTED

As far as I can tell, I *should* hopefully be able to "reset" this somehow via a curl command, but I'm at a loss as to what. Something like this has no effect:

curl -s -u admin:admin -H 'X-Requested-By: Ambari' -X PUT -d '{"RequestInfo":{"context":"Stop Component"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://localhost:8080/api/v1/clusters/ontomatedev/hosts/myhost.domain.com/host_components/SECONDARY_NAMENODE

If anyone can tell me how to reset this incorrect 'state' I'd be *most* grateful!
Labels:
- Apache Ambari
- Apache Hadoop
12-27-2017 02:48 PM
I deleted all the snapshots and data after getting a go-ahead from the developers...
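For posterity, the cleanup itself was just the standard snapshot commands from the hbase shell; the snapshot name below is a placeholder:

# list the snapshots, then drop the stale ones one by one
echo "list_snapshots" | hbase shell
echo "delete_snapshot 'some_old_snapshot'" | hbase shell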
08-04-2017 07:39 PM
Yup, yup, yup. Found the snapshots... guessing THAT is the culprit. Time to have a conversation with the developers... there's... a lot.
08-04-2017 07:14 PM
As far as I can tell, the hbase.master.hfilecleaner.ttl value was not set at all. (Does that then mean... NO cleaning?) I set it to 900000 ms (15 minutes) and we'll see if anything happens.
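For anyone checking the same thing, a quick way to confirm what the client config actually contains, assuming the stock HDP config dir (no output means the property isn't set there):

grep -A1 'hbase.master.hfilecleaner.ttl' /etc/hbase/conf/hbase-site.xml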
08-04-2017 06:35 PM
Hi! So, I'm the sysadmin of a Hadoop cluster. I am not a developer, nor do I "use" it, but I make sure it's running and happy and secure and so on. In reviewing HDFS disk use lately, I noticed our numbers are kinda high. After some digging, it appears all of the space is going into HBase. OK, cool, that's what our developers are doing: stuffing things into HBase. But I appear to be losing a bunch of disk space to the HBase "archive" folder, which is something I assume HBase puts stuff in when tables are deleted or...? I checked with one of our developers; he sees tables in the archive that he deleted long ago.

So my simple question is: how do I clean out unneeded things from the HBase "archive"? I assume manually deleting stuff via HDFS is *not* the way to go.

hdfs dfs -du -s -h /apps/hbase/data/*
338.6 K  /apps/hbase/data/.hbase-snapshot
0        /apps/hbase/data/.tmp
20       /apps/hbase/data/MasterProcWALs
830      /apps/hbase/data/WALs
6.6 T    /apps/hbase/data/archive   <=== THIS
0        /apps/hbase/data/corrupt
4.1 T    /apps/hbase/data/data
42       /apps/hbase/data/hbase.id
7        /apps/hbase/data/hbase.version
30.7 K   /apps/hbase/data/oldWALs

Any and all help for an HBase newbie would be really appreciated!
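In case it helps, this is how I drilled into the archive per table; the path assumes the default layout of archive/data/<namespace>/<table>:

hdfs dfs -du -h /apps/hbase/data/archive/data/default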
Labels:
- Apache HBase
04-26-2017 09:32 PM
Yeah, the decentralized nature of the keystores is... kind of a huge problem, and as both you and I discovered, not inherently obvious. I still like the Hortonworks setup for ease of management... I *really* do. (I tried building a Hadoop cluster from scratch manually once... you have no idea how bad that was.) But I'm discovering the "ease" of managing courtesy of Hortonworks' setup really applies only about 90% of the time. The other 10% is a bitch. Heh. With the Ambari UI, just about everything is there. Great! ALMOST.
03-23-2017 04:30 PM
Oh my gosh, I totally forgot that yes, this is front-ended by Apache. Setting AllowEncodedSlashes to "On" in my environment solved the issue. (I have Apache 2.2.x, which does not yet have the "NoDecode" value, but that's OK, as "On" seems to work.)
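For reference, the change was just that one directive in the vhost that fronts Ambari, followed by a reload (the apachectl commands assume a stock httpd install):

# in the VirtualHost that proxies Ambari:
#   AllowEncodedSlashes On
# then syntax-check and reload:
apachectl -t && apachectl graceful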
Thank you thank you thank you!
03-23-2017 02:50 PM
@SBandaru I tried the change above and it has made no difference. I still have the same behavior.