Member since
07-19-2018
613
Posts
100
Kudos Received
117
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3706 | 01-11-2021 05:54 AM
 | 2586 | 01-11-2021 05:52 AM
 | 6988 | 01-08-2021 05:23 AM
 | 6439 | 01-04-2021 04:08 AM
 | 29537 | 12-18-2020 05:42 AM
02-23-2019
03:06 PM
Today I am working on an ELK MPack for HDP 3.0+ and HDF 3.0+. To pick this back up, I have started another documentation journey to record my progress. Starting with this post and https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide I was able to get a base cluster up with the ELK stack described above. I will be creating another article on getting this working on the new HDP stacks, as well as the changes needed for the most recent versions of the ELK stack.
09-02-2018
04:11 PM
@Thuy Le great job! Glad you got it working.
09-02-2018
04:03 PM
@Amit Nandi sure, I will work on an article on how to make an MPack.
08-31-2018
04:04 PM
Look at the other question you posted, which I answered: https://community.hortonworks.com/questions/215236/apache-nifi-to-elasticsearch-convert-time-to-date.html
08-31-2018
11:08 AM
@Thuy Le In order for the timestamp to work, it has to be in the format Elasticsearch expects (ISO 8601 by default). Here is a good article about timestamp manipulation in NiFi: https://community.hortonworks.com/questions/113959/use-nifi-to-change-the-format-of-numeric-date-and.html
Then in Elasticsearch, you need to create a mapping that tells the index which field is the timestamp: https://www.elastic.co/guide/en/elasticsearch/hadoop/current/mapping.html#mapping-date
https://www.elastic.co/guide/en/elasticsearch/reference/2.0/mapping-date-format.html#built-in-date-formats
If this answer is helpful, please click ACCEPT to mark your question resolved.
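As a quick sanity check of the default format, an ISO 8601 timestamp of the shape Elasticsearch parses out of the box can be produced from the shell. This is an illustrative sketch, not a command from the original post:

```shell
# Emit a UTC timestamp in the ISO 8601 shape Elasticsearch's default
# date parsing accepts, e.g. 2018-08-31T11:08:00Z
date -u +"%Y-%m-%dT%H:%M:%SZ"
```

Comparing this output against the field your flow produces is a quick way to spot a format mismatch before touching the mapping.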
08-30-2018
03:07 PM
In Hortonworks HCP they use an Elasticsearch MPack for version 5.x of Elasticsearch and Kibana: http://public-repo-1.hortonworks.com/HCP/centos7/1.x/updates/1.6.0.0/tars/metron/elasticsearch_mpack-1.6.0.0-7.tar.gz

I have taken this and expanded it to include version 6.3.2 of Elasticsearch, Logstash, Kibana, FileBeat, and MetricBeat. My cluster is 6 nodes: Elasticsearch is installed on nodes 4, 5, and 6 (node 4 is a master; nodes 5 and 6 are data nodes). Logstash is installed on node 3. FileBeat and MetricBeat are installed on all 6 nodes. Kibana is installed on node 4. The rest of the cluster is configured normally for a minimal install.

Downloads

- Original HCP MPack: elasticsearch-mpack-1600-7.tar.gz
- My MPack 6.3.0 (version change + Logstash + Beats): elasticsearch-mpack-2500-9.tar.gz
- My MPack 6.3.2 (version change + Logstash multi-node & JVM settings config + sudo non-root capabilities): elasticsearch-mpack-2600-9.tar.gz

Installation Steps

1. Deliver the Management Pack tar.gz to the local filesystem on the Ambari server: upload to HDFS via the Files View, then download to the ambari-server node (/home/root).
2. Install the Management Pack:
sudo ambari-server install-mpack --mpack=/home/root/elasticsearch_mpack-2.6.0.0-9.tar.gz --verbose
To uninstall:
sudo ambari-server uninstall-mpack --mpack-name=elasticsearch-ambari.mpack
3. Restart the Ambari server:
sudo ambari-server restart
4. Use Ambari's Add Service wizard to install the ELK stack components. The following settings are required during the Install Wizard:
- ES_URL, example: http://node4.hostname.com:9200
- KIBANA_URL, example: http://node4.hostname.com:5000
- LOGSTASH_URL, example: http://node3.hostname.com:5044
- Elasticsearch Zen Discovery Hosts, example: [ node4.hostname.com, node5.hostname.com, node6.hostname.com ]

Configuration

Post installation, Ambari handles the configuration of all components, including Logstash (input, output, and filters) and the FileBeat and MetricBeat configuration files. Elasticsearch configuration should work out of the box with no changes other than the Zen Discovery Hosts. Logstash filters are set up for a Beats input, a FileBeat file filter, and an Elasticsearch output. FileBeat is set up to ship to Logstash. MetricBeat is set up to send metrics directly to Elasticsearch.

Summary

This Elasticsearch MPack is a good example of how to create your own custom stack using a Management Pack to define services not normally found in an Ambari cluster. Ambari administrators looking to understand how to create their own Management Pack should take some time to diff the MPacks listed above. Creating custom services controlled via Ambari is fairly easy if you mimic the folder structure, make the necessary XML file changes, and adjust the Python package scripts accordingly.
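For context, the Zen Discovery Hosts value from the wizard typically ends up in elasticsearch.yml. A hypothetical fragment for Elasticsearch 6.x follows; the hostnames are the examples from this post, and the exact keys the MPack renders may differ:

```yaml
# Hypothetical elasticsearch.yml fragment (Elasticsearch 6.x).
# The actual keys written by the MPack may differ.
discovery.zen.ping.unicast.hosts:
  - node4.hostname.com
  - node5.hostname.com
  - node6.hostname.com
```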
08-30-2018
03:04 PM
@Michael Bronson I don't normally like to suggest non-ASF options here in HCC, but have you checked out Elastic Beats? I am using MetricBeat to get Unix cluster monitoring on our Ambari nodes, as well as Windows workstation metrics such as:
- CPU used
- Memory used
- Disk used
- Load average
- Inbound/outbound traffic
- Host processes
- and more...
There is also a WinLog beat that allows us to tap into the Windows event log and performance monitoring.
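For reference, a hypothetical metricbeat.yml fragment covering the host metrics listed above (MetricBeat 6.x system module; the Elasticsearch host is illustrative, not from the post):

```yaml
# Hypothetical metricbeat.yml sketch (MetricBeat 6.x system module)
# for the host metrics listed above; output host is illustrative.
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "filesystem", "load", "network", "process"]
    period: 10s
output.elasticsearch:
  hosts: ["node4.hostname.com:9200"]
```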
08-30-2018
11:33 AM
@Raj ji Check out this solution: https://community.hortonworks.com/questions/147226/replacetextprocessor-remove-blank-lines.html If this answer is helpful, please choose ACCEPT to mark the question resolved.
08-30-2018
11:23 AM
@Tony Cheng In my setup I had to edit the configuration file and set 3 values to debug for the logging to show full details.

Tailing the Ranger Sync Logs

In order to see full output for Ranger User Sync, it is necessary to modify the Log4j XML:

sudo nano /usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/log4j.xml

NOTE: Edit the 3 log entries at the bottom of the file from "info" to "debug". Be sure to change them back when done debugging.

Restart All for Ranger, and then you can tail these 2 files:

sudo tail -f /var/log/ranger/admin/xa_portal.log
sudo tail -f /var/log/ranger/usersync/usersync.log
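The "info" to "debug" edit can also be scripted. A hedged sketch follows, demonstrating the sed pattern on a sample copy; the real file is the log4j.xml path above (editing it requires sudo), and the logger name below is illustrative, not taken from the actual file:

```shell
# Demonstrate flipping log4j 1.x levels from "info" to "debug" on a
# sample copy. The real file is
# /usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/log4j.xml (needs sudo);
# the logger name here is illustrative.
cat > /tmp/log4j-sample.xml <<'EOF'
<logger name="org.apache.ranger.ldapusersync" additivity="false">
  <level value="info"/>
</logger>
EOF
# -i.bak keeps a backup so the change is easy to revert after debugging
sed -i.bak 's|value="info"|value="debug"|g' /tmp/log4j-sample.xml
grep 'value="debug"' /tmp/log4j-sample.xml
```

The `.bak` backup makes it easy to restore the original levels when you are done debugging, as the note above advises.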