Member since: 07-30-2019
Posts: 333
Kudos Received: 356
Solutions: 76
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 9915 | 02-17-2017 10:58 PM
 | 2309 | 02-16-2017 07:55 PM
 | 8009 | 12-21-2016 06:24 PM
 | 1764 | 12-20-2016 01:29 PM
 | 1241 | 12-16-2016 01:21 PM
10-22-2015 02:23 PM
Based on @afernandez@hortonworks.com's suggestion, reference docs for Alerts and Alerts History are available at https://github.com/apache/ambari/blob/branch-2.1/ambari-server/docs/api/v1/alerts.md. E.g., to get critical Kafka service alerts (I am using jq, a brilliant CLI JSON processor: https://stedolan.github.io/jq/):

curl -s -u admin:admin -H 'X-Requested-By: ambari' \
  http://localhost:8080/api/v1/clusters/Sandbox/services/KAFKA/alert_history?fields=* \
  | jq '.items[] | select(.AlertHistory.state=="CRITICAL")'

This gives me output like this (pretty-printed):

{
"href": "http://localhost:8080/api/v1/clusters/Sandbox/services/KAFKA/alert_history/16",
"AlertHistory": {
"cluster_name": "Sandbox",
"component_name": "KAFKA_BROKER",
"definition_id": 1,
"definition_name": "kafka_broker_process",
"host_name": "sandbox.hortonworks.com",
"id": 16,
"instance": null,
"label": "Kafka Broker Process",
"service_name": "KAFKA",
"state": "CRITICAL",
"text": "Connection failed: [Errno 111] Connection refused to sandbox.hortonworks.com:6667",
"timestamp": 1439987016669
}
}
{
"href": "http://localhost:8080/api/v1/clusters/Sandbox/services/KAFKA/alert_history/184",
"AlertHistory": {
"cluster_name": "Sandbox",
"component_name": "KAFKA_BROKER",
"definition_id": 1,
"definition_name": "kafka_broker_process",
"host_name": "sandbox.hortonworks.com",
"id": 184,
"instance": null,
"label": "Kafka Broker Process",
"service_name": "KAFKA",
"state": "CRITICAL",
"text": "Connection failed: [Errno 111] Connection refused to sandbox.hortonworks.com:6667",
"timestamp": 1439989776335
}
}
{
"href": "http://localhost:8080/api/v1/clusters/Sandbox/services/KAFKA/alert_history/452",
"AlertHistory": {
"cluster_name": "Sandbox",
"component_name": "KAFKA_BROKER",
"definition_id": 1,
"definition_name": "kafka_broker_process",
"host_name": "sandbox.hortonworks.com",
"id": 452,
"instance": null,
"label": "Kafka Broker Process",
"service_name": "KAFKA",
"state": "CRITICAL",
"text": "Connection failed: [Errno 111] Connection refused to sandbox.hortonworks.com:6667",
"timestamp": 1443742219799
}
}
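As a variation, the filtering can also be pushed to the server with an Ambari query predicate instead of jq; assuming the same Sandbox cluster and credentials as above, something like this should return only the CRITICAL entries:

curl -s -u admin:admin -H 'X-Requested-By: ambari' \
  'http://localhost:8080/api/v1/clusters/Sandbox/services/KAFKA/alert_history?AlertHistory/state=CRITICAL&fields=*'

This keeps the payload small on busy clusters, and jq then stays free for reshaping the output rather than filtering it.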
10-22-2015 01:24 PM
1 Kudo
The inner workings of the MergeContent processor can certainly be challenging to understand. If you are running into the nifi.queue.swap.threshold limit of MergeContent as described in NIFI-697, then you should increase that value in the nifi.properties file and restart your NiFi process; a multiple of 10000 is recommended. You will also likely have to increase your Java memory settings in bootstrap.conf.

MergeContent works like this. When a FlowFile arrives at MergeContent, it is assigned to a bin based on Merge Strategy and Correlation Attribute Name. Maximum Number of Bins controls resource usage: if all bins have FlowFiles in them and another FlowFile arrives that doesn't fit into one of those bins, then the oldest bin is automatically marked as complete, and the new FlowFile starts its own new bin. A bin is complete once (number of files in bin >= Minimum Number of Entries AND number of bytes in bin >= Minimum Group Size) OR the bin has existed for Max Bin Age. The FlowFiles in the bin are then merged and sent to an output relationship.

Maximum Number of Entries and Maximum Group Size prevent bins from becoming "over full". For example, when Maximum Group Size is 1 GB and a bin currently has 900 MB in it, and a FlowFile arrives that is 200 MB in size, the 200 MB FlowFile will not make that bin "over full" but instead will get a bin all to itself. Credit goes to Michael Moser from the NiFi user list.
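For reference, the two tuning changes mentioned above look roughly like this; the values are illustrative, so check the defaults that ship with your NiFi version:

# nifi.properties -- raise the swap threshold (use a multiple of 10000)
nifi.queue.swap.threshold=50000

# bootstrap.conf -- give the JVM more heap for the merge bins
java.arg.2=-Xms1g
java.arg.3=-Xmx2g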
10-21-2015 06:25 PM
Changing a retention period on the topic only marks data segments for deletion. The actual cleanup thread kicks in every 5 minutes (the default for log.retention.check.interval.ms). I'm not sure the OP wants to wait up to 5 minutes for the data to be deleted.
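For completeness, the retention change itself looks roughly like this on HDP 2.3-era Kafka (the topic name and retention value here are illustrative):

bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --alter --topic someTopic --config retention.ms=1000

Even then, the segments only disappear on the cleanup thread's next pass, i.e. up to log.retention.check.interval.ms later.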
10-21-2015 06:12 PM
11 Kudos
Once one moves beyond trivial flow design, a canvas like the one below becomes common. Inevitably, it gets messier over time and harder to glance over. Here's a great tip: double-click on the connection line that you want to bend. A new yellow anchor will appear, which you can then drag around to organize things nicely. Bonus tip: you can add multiple bend points to a connection. To remove a specific anchor, simply double-click on it again; repeat for each yellow point. And an extra bonus, courtesy of @mgilman@hortonworks.com: a connection label can be moved along the connection and snaps to a bend point, as shown below.
10-21-2015 02:15 PM
Wouldn't that cause all kinds of issues with ZooKeeper state? Besides, the data replicas will be on multiple nodes.
10-21-2015 02:14 PM
4 Kudos
In HDP 2.3 you can delete a topic via a standard admin command:

bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --delete --topic someTopic

For this to work, make sure the delete.topic.enable flag is set to true (check it via Ambari). Once the topic is deleted, any new message posted to it will re-create it, assuming the auto.create.topics.enable property is set to true (it might be disabled in production, just like the delete flag above). If that's the case, the admin must create the topic again:

bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --create --topic someTopic --partitions 10 --replication-factor 3
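To verify the deletion actually went through (assuming the same ZooKeeper address as above):

bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --list

If delete.topic.enable is off, the topic typically lingers in this list flagged as "marked for deletion" instead of disappearing.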
10-19-2015 07:17 PM
No, there's a lot more going on under the hood, and the Ambari Agent is required for operations. Essentially: bring a node into the cluster (the agent gets installed) and provision only the services desired (e.g. only Knox).
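As an illustration, both steps can be driven through the Ambari REST API; the calls below follow the standard v1 pattern, with the cluster name Sandbox and the hostname newnode.example.com as placeholder values:

curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  http://localhost:8080/api/v1/clusters/Sandbox/hosts/newnode.example.com

curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  http://localhost:8080/api/v1/clusters/Sandbox/services/KNOX

The agent still has to be installed and registered on the new node first; these calls only attach the host and the service definition to the cluster.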
10-16-2015 07:19 PM
Is there a way to tell this SEND event apart from 'regular flow' SEND events? We have people asking about fishing out this event specifically. Replay would probably be the same implementation.
10-16-2015 07:02 PM
NiFi has a special role requirement to allow/disallow downloading the content from the provenance view. However, is it possible to record the fact that someone downloaded the contents of an event?
Labels: Apache NiFi, Cloudera DataFlow (CDF)