Member since: 10-14-2015
Posts: 165
Kudos Received: 63
Solutions: 27

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2607 | 12-11-2018 03:42 PM
 | 2259 | 04-13-2018 09:17 PM
 | 1471 | 02-08-2018 06:34 PM
 | 3351 | 01-24-2018 02:18 PM
 | 8156 | 10-11-2017 07:27 PM
12-13-2018 08:12 PM
In order to associate an `AlertTarget` with specific groups, you would use the `groups` property along with an array of IDs for the groups you care about:

    {
      "AlertTarget": {
        "name": "Administrators",
        "description": "The Admins",
        "notification_type": "EMAIL",
        "groups": [1, 17, 23],
        ...
      }
    }
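For reference, here is a minimal sketch of creating that target by POSTing to the alert targets endpoint. The server URL, credentials, and the `ambari.dispatch.recipients` value are placeholders I've added to make the example self-contained; check the alert-dispatching docs for the full set of properties.

```python
# Sketch: create an EMAIL AlertTarget scoped to specific alert groups.
# The URL, credentials, and recipient address below are assumptions.
import json
import requests

AMBARI = "http://ambari.example.com:8080"   # assumption: your Ambari Server
AUTH = ("admin", "admin")                   # assumption: admin credentials
HEADERS = {"X-Requested-By": "ambari"}      # Ambari requires this header on modifying requests

target = {
    "AlertTarget": {
        "name": "Administrators",
        "description": "The Admins",
        "notification_type": "EMAIL",
        "groups": [1, 17, 23],
        "properties": {
            "ambari.dispatch.recipients": ["admin@example.com"]  # assumption: example recipient
        }
    }
}

resp = requests.post(f"{AMBARI}/api/v1/alert_targets",
                     auth=AUTH, headers=HEADERS, data=json.dumps(target))
resp.raise_for_status()
print("Created alert target, HTTP", resp.status_code)
```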
12-11-2018 03:42 PM
I agree with Akhil; we'd need some more information about what you are trying to do. However, his links are a great place to start. Also, it seems like you're trying to use the REST APIs directly, in which case this link might also be of some help since it gives examples for using the alert groups and targets endpoints: https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/alert-dispatching.md
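In case it saves you a step, a rough sketch of hitting those endpoints directly looks like this. The host, credentials, cluster name, and the exact response fields are assumptions on my part; the linked doc is the authority.

```python
# Sketch: list the alert groups for a cluster and the (global) alert targets.
# Host, credentials, and cluster name are placeholders.
import requests

AMBARI = "http://ambari.example.com:8080"
AUTH = ("admin", "admin")
CLUSTER = "MyCluster"  # assumption: your cluster name

groups = requests.get(f"{AMBARI}/api/v1/clusters/{CLUSTER}/alert_groups", auth=AUTH).json()
targets = requests.get(f"{AMBARI}/api/v1/alert_targets", auth=AUTH).json()

# Each item carries an id you can reference when wiring targets to groups.
for item in groups.get("items", []):
    print(item["AlertGroup"]["id"], item["AlertGroup"]["name"])
print(len(targets.get("items", [])), "alert targets defined")
```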
07-26-2018 12:59 PM
It's a timeout problem. The alert is giving the beeline command 60 seconds to spin up a JVM and connect to Hive. You can always go to the Alert's definition in the Ambari UI and change this timeout property to something higher (like 75 seconds). However, before you do that, you might want to run the command yourself and see how long it takes. If it's taking more than a minute, that could indicate a problem with resources on this host.
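If you want to measure it yourself, something along these lines times the same kind of beeline check the alert performs. The JDBC URL here is a placeholder; use whatever your alert definition and cluster actually point at.

```python
# Sketch: time a simple beeline connection/query to see if it fits in the
# alert's 60-second window. The JDBC URL is an assumption.
import subprocess
import time

jdbc_url = "jdbc:hive2://hiveserver2.example.com:10000/"  # assumption: placeholder URL

start = time.time()
proc = subprocess.run(
    ["beeline", "-u", jdbc_url, "-e", "select 1;"],
    capture_output=True, text=True, timeout=120,
)
elapsed = time.time() - start

print(f"beeline exited with {proc.returncode} after {elapsed:.1f}s")
# If this regularly takes more than ~60s, raising the alert timeout only hides
# the symptom; look at the resource pressure on the host instead.
```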
04-16-2018 01:26 PM
Ah, sorry, try /var/log/hadoop-yarn/yarn
04-13-2018 09:37 PM
The disk usage alert runs every few minutes. If it hasn't cleared, then perhaps you didn't add enough storage. If you check the alert's message, you can see why it thinks you don't have enough space, and you can verify your new mounts from there. The logs would be in /var/log/hadoop/yarn on the ResourceManager host.
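If you want to double-check the numbers yourself, a quick sketch like this reports roughly what the disk usage check sees on a host. The mount points are placeholders; substitute the ones you added.

```python
# Sketch: report percent-used and free space for a few mounts.
# The mount points listed are assumptions; use your own.
import shutil

for mount in ("/", "/grid/0", "/grid/1"):  # assumption: example mounts
    usage = shutil.disk_usage(mount)
    pct_used = usage.used / usage.total * 100
    print(f"{mount}: {pct_used:.1f}% used, {usage.free // 2**30} GiB free")
```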
04-13-2018 09:17 PM
Your disk usage alerts are still there because they are valid. They won't clear until you resolve the problem (adding more space to your hosts) or edit the alert and increase the threshold that triggers it. The ResourceManager alert also seems real, since you can't log in to it and the UI indicates it's not running. I would check the RM logs on that host to see why it's having problems.
03-21-2018 07:23 PM
Yes, you can - the above calls can capture it on a host/component basis.
03-21-2018 04:54 PM
This is the correct way to check whether Maintenance Mode is enabled. Which version of Ambari are you running? When I try this locally, the correct value for `maintenance_state` is reflected. Also - you're putting the ZooKeeper service itself into MM, right? Putting individual hosts/components into MM won't reflect here.
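For reference, here's a minimal sketch of asking the API for the service-level maintenance state directly. The host, credentials, and cluster name are placeholders.

```python
# Sketch: read the service-level maintenance_state for ZooKeeper.
# Host, credentials, and cluster name are assumptions.
import requests

AMBARI = "http://ambari.example.com:8080"
AUTH = ("admin", "admin")
CLUSTER = "MyCluster"  # assumption: your cluster name

resp = requests.get(
    f"{AMBARI}/api/v1/clusters/{CLUSTER}/services/ZOOKEEPER",
    params={"fields": "ServiceInfo/maintenance_state"},
    auth=AUTH,
)
# Expect "ON" when the ZooKeeper service itself is in Maintenance Mode.
print(resp.json()["ServiceInfo"]["maintenance_state"])
```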
02-08-2018 06:34 PM
The alerts.json files are only used to seed alert definitions initially. After an alert definition has been created in the system, modification of that alert must be done through the REST API. See: https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/alert-definitions.md#create You can delete the alert definition as well. This will cause the alerts.json to be read in again on Ambari Server restart.
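As a sketch of what that looks like against the REST API: the host, credentials, cluster name, and definition id below are placeholders, and the linked doc describes the full payloads.

```python
# Sketch: update or delete an existing alert definition over the REST API.
# Host, credentials, cluster name, and the definition id are assumptions.
import json
import requests

AMBARI = "http://ambari.example.com:8080"
AUTH = ("admin", "admin")
HEADERS = {"X-Requested-By": "ambari"}  # required on PUT/DELETE
CLUSTER = "MyCluster"                   # assumption: your cluster name
DEFINITION_ID = 42                      # assumption: look it up via GET .../alert_definitions

base = f"{AMBARI}/api/v1/clusters/{CLUSTER}/alert_definitions/{DEFINITION_ID}"

# Modify the existing definition (for example, run it every 5 minutes).
requests.put(base, auth=AUTH, headers=HEADERS,
             data=json.dumps({"AlertDefinition": {"interval": 5}}))

# Or delete it; alerts.json will seed the definition again on the next
# Ambari Server restart.
requests.delete(base, auth=AUTH, headers=HEADERS)
```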
01-24-2018 03:03 PM
Let me see if I can help you through this. Can you perform the following query for me:

    SELECT repo_version_id, version, display_name FROM repo_version ORDER BY version;

This will get you a list that looks something like:

     repo_version_id |   version    |   display_name
    -----------------+--------------+------------------
                   1 | 2.5.0.0-1237 | HDP-2.5.0.0-1237
                 101 | 2.5.4.0-121  | HDP-2.5.4.0-121
                  51 | 2.6.0.0-334  | HDP-2.6.0.0-334

Chances are the most recent version is the one that you're on (or are at least supposed to be on). In my case, this is ID 51. So, you would do:

    UPDATE cluster_version SET state = 'CURRENT' WHERE repo_version_id = 51;
The upgrade should work now after making this kind of change.