Member since: 03-16-2016
Posts: 707
Kudos Received: 1753
Solutions: 203

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4997 | 09-21-2018 09:54 PM |
| | 6329 | 03-31-2018 03:59 AM |
| | 1932 | 03-31-2018 03:55 AM |
| | 2142 | 03-31-2018 03:31 AM |
| | 4732 | 03-27-2018 03:46 PM |
12-26-2016
10:01 PM
2 Kudos
@Raghvendra Singh Tutorial: http://hortonworks.com/hadoop-tutorial/getting-started-with-pivotal-hawq-on-hortonworks-sandbox/ Look for the section USING OTHER TOOLS TO WORK WITH HAWQ and follow the instructions on how to download the ODBC/JDBC driver and how to use it. If your data is already stored as JSON then you are set; otherwise you will have to convert it before displaying it in your d3js-based dashboard. HAWQ is a SQL database with advanced ANSI compliance.
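If you would rather pull the data programmatically than go through ODBC/JDBC, here is a minimal sketch that queries HAWQ over its PostgreSQL-compatible protocol and serves the rows as JSON for a d3js dashboard. The host, credentials, and the metrics table are placeholders, and using the github.com/lib/pq driver is an assumption based on HAWQ speaking the Postgres wire protocol, so adjust as needed.

```go
package main

import (
	"database/sql"
	"encoding/json"
	"log"
	"net/http"

	_ "github.com/lib/pq" // HAWQ is reachable over the PostgreSQL wire protocol
)

func main() {
	// Placeholder connection string; adjust host, port, user, and database.
	db, err := sql.Open("postgres", "host=sandbox.example.com port=5432 user=gpadmin dbname=demo sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/data", func(w http.ResponseWriter, r *http.Request) {
		rows, err := db.Query("SELECT name, value FROM metrics") // hypothetical table
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		defer rows.Close()

		type point struct {
			Name  string  `json:"name"`
			Value float64 `json:"value"`
		}
		var out []point
		for rows.Next() {
			var p point
			if err := rows.Scan(&p.Name, &p.Value); err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			out = append(out, p)
		}
		// d3js can fetch this endpoint directly and receive JSON.
		json.NewEncoder(w).Encode(out)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```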
06-20-2017
04:59 AM
Hi, are there any steps to ship our own Kafka server metrics to Elasticsearch? We have Grafana, which has all the dashboards, but one of our project requirements is to keep a few Kafka metrics in a Kibana visualization, so we want to index the Kafka metrics logs into Elasticsearch. Can we consume Kafka metrics into Elasticsearch?
05-24-2016
03:53 PM
@Constantin Stanca what do you mean by "Additionally, you need to start the JVM with something like this in order to be able to truly access the JVM remotely"? The JVMs start as they normally do when using this tool.
05-27-2016
09:34 PM
@Vladimir Zlatkin Updated the "NUMA optimization" section to include a link to OS CPU optimizations for RHEL: "Spark application performance can be improved not only by configuring various Spark parameters and JVM options, but also by using operating-system-side optimizations, e.g. CPU affinity, NUMA policy, hardware performance policy, etc., to take advantage of the most recent NUMA-capable hardware." The referenced section is vast. We could qualify some of the settings as best choices. This could be a follow-up article if part 1 presented real interest.
05-18-2016
07:25 PM
1 Kudo
We certify against major versions of HDP.
05-17-2016
10:59 PM
16 Kudos
This article is a continuation of Monitoring Kafka with Burrow - Part 1. Before diving into evaluation rules, the HTTP endpoint API, and notifiers, I would like to point out a few other tools that are utilizing Burrow:

- Burrower (http://github.com/splee/burrower), a tool for gathering consumer lag information from Burrow and sending it into InfluxDB
- ansible-burrow (https://github.com/slb350/ansible-burrow), which provides an Ansible role for installing Burrow

Consumer Lag Evaluation Status

The status of a consumer group in Burrow is determined based on several rules evaluated against the offsets for each partition the group consumes. Thus, there is no need for setting a discrete threshold for the number of messages a consumer is allowed to be behind before alerts go off. By evaluating against every partition the group consumes, the entire consumer group health status is evaluated, and not just the topics that are being monitored. This is very important for wildcard consumers, such as Kafka Mirror Maker.
Window

The lagcheck configuration determines the length of the sliding window, specifying the number of offsets to store for each partition that a consumer group consumes. This window moves forward with each offset the consumer commits (the oldest offset is removed when the new offset is added). For each consumer offset, the following are stored: the offset itself, the timestamp at which the consumer committed it, and the lag at the point Burrow received it. The lag is calculated as the difference between the head offset of the broker and the consumer's offset. Because broker offsets are fetched on a fixed interval, the result could be a negative number; in that case, by convention, the stored lag value is zero.
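To make the window concrete, here is a small sketch (not Burrow's actual code) of what one stored entry and the lag clamping described above might look like; the struct and field names are illustrative only.

```go
package main

import (
	"fmt"
	"time"
)

// offsetEntry is an illustrative version of what Burrow stores per committed offset.
type offsetEntry struct {
	Offset    int64     // the offset the consumer committed
	Timestamp time.Time // when the consumer committed it
	Lag       int64     // broker head offset minus consumer offset, clamped at zero
}

// computeLag clamps negative values to zero, since broker head offsets are
// only refreshed on a fixed interval and can briefly trail the consumer.
func computeLag(brokerHead, consumerOffset int64) int64 {
	lag := brokerHead - consumerOffset
	if lag < 0 {
		return 0
	}
	return lag
}

func main() {
	e := offsetEntry{Offset: 1042, Timestamp: time.Now(), Lag: computeLag(1100, 1042)}
	fmt.Printf("committed %d, lag %d\n", e.Offset, e.Lag)
}
```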
Rules

The following rules are used for evaluation of a group's status for a given partition:

- If any lag within the window is zero, the status is considered to be OK.
- If the consumer offset does not change over the window, and the lag is either fixed or increasing, the consumer is in an ERROR state, and the partition is marked as STALLED.
- If the consumer offsets are increasing over the window, but the lag either stays the same or increases between every pair of offsets, the consumer is in a WARNING state. This means that the consumer is slow, and is falling behind.
- If the difference between the time now and the time of the most recent offset is greater than the difference between the most recent offset and the oldest offset in the window, the consumer is in an ERROR state and the partition is marked as STOPPED. However, if the consumer offset and the current broker offset for the partition are equal, the partition is not considered to be in error.
- If the lag is -1, this is a special value that means we do not have a broker offset yet for that partition. This only happens when Burrow is starting up, and the status is considered to be OK.
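To make the evaluation logic concrete, here is a simplified sketch of how these rules could be applied to a window of commits. It is not Burrow's actual implementation (which compares every pair of offsets in the window rather than only the endpoints), and the entry type simply repeats the illustrative record from the earlier sketch.

```go
package main

import (
	"fmt"
	"time"
)

// entry mirrors the illustrative per-offset record from the previous sketch.
type entry struct {
	Offset    int64
	Timestamp time.Time
	Lag       int64
}

type partitionStatus string

const (
	statusOK      partitionStatus = "OK"
	statusWarning partitionStatus = "WARNING"
	statusStalled partitionStatus = "ERROR (STALLED)"
	statusStopped partitionStatus = "ERROR (STOPPED)"
)

// evaluate applies a simplified reading of the rules above to the window of
// commits for one partition (oldest first); brokerHead is the current head offset.
func evaluate(window []entry, brokerHead int64, now time.Time) partitionStatus {
	if len(window) == 0 {
		return statusOK
	}
	first, last := window[0], window[len(window)-1]

	for _, e := range window {
		// A lag of -1 means no broker offset yet (Burrow is starting up), and
		// any zero lag means the consumer caught up at some point: both are OK.
		if e.Lag == -1 || e.Lag == 0 {
			return statusOK
		}
	}

	// No commit for longer than the window spans, and not at the head: STOPPED.
	if now.Sub(last.Timestamp) > last.Timestamp.Sub(first.Timestamp) && last.Offset != brokerHead {
		return statusStopped
	}

	// Offsets not moving while the lag holds or grows: STALLED.
	if last.Offset == first.Offset && last.Lag >= first.Lag {
		return statusStalled
	}

	// Offsets moving, but the lag never shrinking: the consumer is falling behind.
	if last.Offset > first.Offset && last.Lag >= first.Lag {
		return statusWarning
	}
	return statusOK
}

func main() {
	now := time.Now()
	window := []entry{
		{Offset: 100, Timestamp: now.Add(-3 * time.Minute), Lag: 50},
		{Offset: 120, Timestamp: now.Add(-2 * time.Minute), Lag: 65},
		{Offset: 140, Timestamp: now.Add(-1 * time.Minute), Lag: 80},
	}
	fmt.Println(evaluate(window, 220, now)) // lag keeps growing -> WARNING
}
```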
HTTP Endpoint API

The HTTP server in Burrow provides a convenient way to interact with both Burrow and the Kafka and Zookeeper clusters. Requests are simple HTTP calls and all responses are formatted as JSON. For bad requests, Burrow will return an appropriate HTTP status code in the 400 or 500 range. The response body will contain a JSON object with more detail on the particular error encountered. Examples of requests:

| Request | URL Path | Description |
|---|---|---|
| Healthcheck | GET /burrow/admin | Healthcheck of Burrow, whether for monitoring or load balancing within a VIP. |
| List Clusters | GET /v2/kafka or GET /v2/zookeeper | List of the Kafka clusters that Burrow is configured with. |
| Kafka Cluster Detail | GET /v2/kafka/(cluster) | Detailed information about a single cluster, specified in the URL. This will include a list of the brokers and zookeepers that Burrow is aware of. |
| List Consumers | GET /v2/kafka/(cluster)/consumer | List of the consumer groups that Burrow is aware of from offset commits in the specified Kafka cluster. |
| Remove Consumer Group | DELETE /v2/kafka/(cluster)/consumer/(group) | Removes the offsets for a single consumer group from a cluster. This is useful in the case where the topic list for a consumer has changed, and Burrow believes the consumer is consuming topics that it no longer is. The consumer group will be removed, but it will automatically be repopulated if the consumer continues to commit offsets. |
| List Consumer Topics | GET /v2/kafka/(cluster)/consumer/(group)/topic | List of the topics that Burrow is aware of from offset commits consumed by the specified consumer group in the specified Kafka cluster. |
| Consumer Topic Detail | GET /v2/kafka/(cluster)/consumer/(group)/topic/(topic) | Most recent offsets for each partition in the specified topic, as committed by the specified consumer group. |
| Consumer Group Status | GET /v2/kafka/(cluster)/consumer/(group)/status or GET /v2/kafka/(cluster)/consumer/(group)/lag | Current status of the consumer group, based on evaluation of all partitions it consumes. The evaluation is performed on request, and the result is calculated based on the consumer lag evaluation rules. There are two versions of this request: the "/status" endpoint returns an object that only includes the partitions that are in a bad state, while the "/lag" endpoint returns an object that includes all partitions for the consumer, regardless of the evaluated state of each partition. The second version can be used for full reporting of consumer message lag on all partitions. |
| List Cluster Topics | GET /v2/kafka/(cluster)/topic | List of the topics in the specified Kafka cluster. |
| Cluster Topic Detail | GET /v2/kafka/(cluster)/topic/(topic) | Head offsets for each partition in the specified topic, as retrieved from the brokers. Note that these offsets may be up to the number of seconds old specified by the broker-offsets configuration parameter. |
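As an illustration of how these endpoints might be consumed, here is a small sketch that polls the consumer group status endpoint and prints the overall state. The Burrow host, cluster, and group names are placeholders, and the JSON fields decoded here (error, message, status.status) are an assumption about the response shape, so verify them against the output of your Burrow version.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// statusResponse models only the fields this sketch needs; the exact JSON
// layout should be verified against the Burrow version you are running.
type statusResponse struct {
	Error   bool   `json:"error"`
	Message string `json:"message"`
	Status  struct {
		Status string `json:"status"`
	} `json:"status"`
}

func main() {
	// Placeholder Burrow host, cluster, and consumer group names.
	url := "http://burrow.example.com:8000/v2/kafka/local/consumer/critical-consumer-group/status"

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var s statusResponse
	if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
		log.Fatal(err)
	}
	if s.Error {
		log.Fatalf("burrow returned an error: %s", s.Message)
	}
	fmt.Printf("consumer group state: %s\n", s.Status.Status)
}
```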
Notifiers

Two notifier modules are available to configure to check and report consumer group status: email and HTTP.

Email Notifier

The email notifier is used to send out emails to a specified address whenever a consumer group is in a bad state. Multiple groups can be configured for a single email address, and the interval to check the status on (and send out emails on) is configurable per email address. Before configuring any email notifiers, the [smtp] section needs to be configured in the Burrow configuration file. Example of configuration:

[smtp]
server=mailserver.example.com
port=25
auth-type=plain
username=emailuser
password=s3cur3!
from=burrow-noreply@example.com
template=config/default-email.tmpl
Multiple email notifiers can be configured in the Burrow configuration file. Each notifier configuration resides in its own section. Example of configuration:

[email "bofh@example.com"]
group=local,critical-consumer-group
group=local,other-consumer-group
interval=60

The email that is sent is formatted according to the template specified in the [smtp] configuration section. A default template is provided as part of the Burrow distribution in the config/default-email.tmpl file. The template format is the standard Golang text template. There are several good posts available on how to compose Golang templates:
http://andlabs.lostsig.com/blog/2014/05/26/8/the-go-templates-post
http://jan.newmarch.name/go/template/chapter-template.html
http://golangtutorials.blogspot.com/2011/06/go-templates.html
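Since the templates are standard Go text templates, a tiny standalone example may help readers who have not used them before. The fields in this struct are made up for illustration and are not the actual variables Burrow exposes to its templates; check the default templates in config/ for the real field names.

```go
package main

import (
	"log"
	"os"
	"text/template"
)

// alert holds illustrative fields only, not Burrow's real template context.
type alert struct {
	Group  string
	Status string
	Lag    int64
}

func main() {
	const body = "Consumer group {{.Group}} is {{.Status}} with a total lag of {{.Lag}} messages.\n"

	tmpl, err := template.New("email").Parse(body)
	if err != nil {
		log.Fatal(err)
	}
	// Renders: Consumer group critical-consumer-group is WARNING with a total lag of 1200 messages.
	if err := tmpl.Execute(os.Stdout, alert{Group: "critical-consumer-group", Status: "WARNING", Lag: 1200}); err != nil {
		log.Fatal(err)
	}
}
```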
A timer is set up inside Burrow to fire every interval seconds and check the listed consumer groups. The current status is requested for each group, and if any group in the list is not in an OK state, an email is sent out with the status of all groups. This means that the email can contain listings for both good and bad groups, but no email will be sent out if everything is OK.
HTTP Notifier

The HTTP notifier reports error states for all consumer groups to an external HTTP endpoint via POST requests. DELETE requests can also be sent to the same endpoint when a consumer group returns to normal. The HTTP notifier is used to send POST requests to an external endpoint, such as a monitoring or notification system, on a specified interval whenever a consumer group is in a bad state. This notifier operates on all consumer groups in all clusters (excluding groups matched by the blacklist). Incidents of a consumer group going bad have a unique ID generated that is maintained until that group transitions back to a good state. This allows notification systems to handle incidents, rather than individual reports of consumer group status, if needed. The configuration for the HTTP notifier is specified under a [httpnotifier] heading. This is where the URL to connect to is configured, as well as the templates to use for POST and DELETE request bodies. Extra fields can be provided, and they are passed through to the template. An example HTTP notifier configuration looks like this:

[httpnotifier]
url=http://notification.server.example.com:9000/v1/alert
interval=60
extra=field1=custom information
extra=field2=special info to pass to template
template-post=config/default-http-post.tmpl
template-delete=config/default-http-delete.tmpl
timeout=5
keepalive=30
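To show what sits on the receiving side of this configuration, here is a minimal sketch of an HTTP endpoint that could accept Burrow's POST and DELETE notifications. The /v1/alert path and port 9000 only mirror the example url above, and simply logging the request body stands in for whatever your monitoring system would actually do.

```go
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/v1/alert", func(w http.ResponseWriter, r *http.Request) {
		defer r.Body.Close()
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}

		switch r.Method {
		case http.MethodPost:
			// A consumer group went into a bad state; the body is rendered
			// from template-post and carries the incident details.
			log.Printf("incident opened: %s", body)
		case http.MethodDelete:
			// The group transitioned back to OK; the body is rendered from template-delete.
			log.Printf("incident cleared: %s", body)
		default:
			http.Error(w, "unsupported method", http.StatusMethodNotAllowed)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":9000", nil))
}
```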
The request body that is sent with each HTTP request is formatted according to the templates specified. A default template is provided as part of the Burrow distribution in the config/default-http-post.tmpl and config/default-http-delete.tmpl files. The template format is the standard Golang text template. There are several good posts available on how to compose Golang templates:
http://andlabs.lostsig.com/blog/2014/05/26/8/the-go-templates-post
http://jan.newmarch.name/go/template/chapter-template.html
http://golangtutorials.blogspot.com/2011/06/go-templates.html

A timer is set up inside Burrow to fire every interval seconds. When the timer fires, all consumer groups in all Kafka clusters are enumerated and the current status is requested for each group. For each group that is not in an OK state, a unique ID is generated (if it does not already exist) and a POST request is generated for that group. For each group that is in an OK state, a check is performed as to whether or not an ID exists for that group currently. If it does, the ID is removed (as the group has transitioned to OK). If the DELETE template is specified, a DELETE request is generated for that group.

Conclusion

The most important metric to watch is whether or not the consumer is keeping up with the messages that are being produced. Until Burrow, the fundamental approach was to monitor the consumer lag and alert on that number. Burrow monitors consumer lag and keeps track of the health of the consuming application, automatically monitoring all consumers, for every partition that they consume. It does this by consuming the special internal Kafka topic to which consumer offsets are written. Burrow provides consumer information as a centralized service that is separate from any single consumer, based on the offsets that the consumers are committing and the broker's state.
05-16-2016
01:44 PM
2 Kudos
@Kirk Haslbeck The interval data type is not supported in Hive yet. See https://issues.apache.org/jira/browse/HIVE-5021. Until the HIVE-5021 feature is added, I would use two BigInt fields in the Hive target table: startInterval and endInterval. Queries using these two fields in WHERE clauses would run better, being more appropriate for indexing and fast scans. For bit[n] in HAWQ, I would use a char, varchar, or string data type in Hive, depending on how big the string needs to be.
06-11-2016
11:03 PM
@Armando Segnini Thank you so much for your review. Your findings were spot-on. I had a few typos and omitted a mv command. Excellent catches.
03-21-2016
10:46 AM
1 Kudo
Hoping you have completed all the prerequisites to run Spark on Mesos; if you haven't yet, please follow http://spark.apache.org/docs/latest/running-on-mesos.html#connecting-spark-to-mesos Regarding the Spark + Mesos and Tableau connection, I believe you need a Spark SQL Thrift server so that Tableau can connect directly to the Thrift port. Moreover, you can start your Thrift server like below.
$SPARK_HOME/sbin/start-thriftserver.sh --master mesos://host:port --deploy-mode cluster --executor-memory 5G
Note: You also need the Spark ODBC driver on the Tableau client side to connect to the Thrift server; you can download it from Here
07-14-2016
03:22 PM
I tried those, but got errors trying to run the example programs. More here: https://community.hortonworks.com/content/kbentry/8452/running-r-program-on-hdp.html