Member since: 07-19-2018
Posts: 613
Kudos Received: 101
Solutions: 117

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4901 | 01-11-2021 05:54 AM |
| | 3337 | 01-11-2021 05:52 AM |
| | 8642 | 01-08-2021 05:23 AM |
| | 8157 | 01-04-2021 04:08 AM |
| | 36035 | 12-18-2020 05:42 AM |
11-04-2019
10:02 AM
1 Kudo
Very good question. Let me share some of my thoughts, as I have installed Ambari both from source and from the Hortonworks repos. Before I get started, you should know that Hortonworks was a major contributor to the Ambari project; as such, their documentation is very detailed on how to install Ambari and its components. In my opinion this is the preferred documentation. The Hortonworks repos are THE public repos for Ambari, and using them is much easier than building from source. The Ambari project page at ambari.apache.org is just the project page. Its documentation is specifically for Ambari, not for "hadoop" generally, and it does not include all the screenshots and deeper information you will find in the Hortonworks/Cloudera documentation for the same steps. Although the project page does not go into much detail, it does have the required artifacts and enough information to set up nodes and get into the Cluster Install Wizard. For organizations that are required to use private repos or to build their own, the Ambari project page is very important.
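For a concrete picture, installing from the public Hortonworks repo on CentOS 7 comes down to just a few commands (the repo URL below pins one example version; substitute the release you need):

wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.7.3.0/ambari.repo
yum install ambari-server -y
ambari-server setup -s
ambari-server start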
11-04-2019
09:52 AM
Here is the working processor: your values would be $.busId, $.speed, $.location. The nested values are $.location.lat and $.location.long. Also make sure the sample JSON quotes "location" (a " is missing in your sample above).
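For reference, here is a minimal sample of the JSON shape those paths assume (the values are made up):

{
  "busId": "bus-42",
  "speed": 35.5,
  "location": {
    "lat": 40.7128,
    "long": -74.006
  }
}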
09-30-2019
07:53 AM
@Jasthi How can you get the process group name and ID? This script gets the current process group, not the one where the error occurred. I am facing the same problem: I am trying to create an error-logging flow that logs everything on the server. Right now I am able to capture the error and send it by email, but nothing identifies which flow the error happened in.
09-23-2019
09:56 AM
Sending an email every time an error occurs may be problematic. The only thing that seems to come close is the Site-to-Site bulletin reporting task. As mentioned, you can find the messages in the logs. If your main concern is accessibility, look into Log Search on HDP.
02-23-2019
04:08 PM
This is a work-in-progress article.

Downloads:
- HDP/HDF 3, Ambari 2.7 Mpack, ELK 6.3.2 with sudo (for non-root user install): elasticsearch_mpack-3.0.0.0-1.tar.gz
- HDP/HDF 3, Ambari 2.7 Mpack, ELK 6.3.2 without sudo: elasticsearch_mpack-3.0.0.0-0.tar.gz

In the parent article I introduced the process of taking an existing Hortonworks ELK Mpack through a series of versions (up to 2.6), which allowed me to install the ELK version and components I want (ElasticSearch, Logstash, Kibana, FileBeats, MetricBeats) in Ambari 2.6. In this article I am going to install the last article's ELK 2.6 version on my local machine using Vagrant. With this working test base I will version the Mpack up to 3.0 as I change the files to allow installation into HDP & HDF 3.0. I am also going to move to the current version of ELK, 6.6.1.

Starting with the 2.6 Mpack and the Ambari Quick Start Guide, I am able to get a cluster installed very easily on my local machine. For my test base install I chose a single node, c7401.ambari.apache.org. For the purpose of this test base I only want to complete the most minimal install that supports the Mpack services without any issues during the Install Wizard. On this single node I install the Ambari Server and Agent and the following components:

- ZooKeeper
- Ambari Metrics
- ElasticSearch
- Logstash
- Kibana
- FileBeats
- MetricBeats

Terminal Commands Required in Local Machine

git clone https://github.com/u39kun/ambari-vagrant.git
sudo -s 'cat ambari-vagrant/append-to-etc-hosts.txt >> /etc/hosts'
cd ambari-vagrant/centos7.4
cp ../insecure_private_key .
cp ~/Downloads/elasticsearch_mpack-2.6.0.0-9.tar.gz .
vagrant up c7401
vagrant ssh c7401

Terminal Commands Required in Vagrant Node

wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.1.0/ambari.repo
yum --enablerepo=extras install epel-release -y
yum install java java-devel ambari-server ambari-agent -y
ambari-server setup -s
ambari-server install-mpack --mpack=/vagrant/elasticsearch_mpack-2.6.0.0-9.tar.gz --verbose
ambari-server start
ambari-agent start

After the install there were some issues starting the services. This is okay for now; the ELK Mpack really needs 4 nodes. Most of the versioning work will be making sure the new stack versions are included, and I can test all of those changes without expecting any services to work. With the test base complete I can now quickly spin up fresh clusters, working my way from HDP 2.6 to HDP 3.1.0 and from ELK 6.3.2 to 6.6.1. The ELK versions will likely require some additional configuration changes, so that step will be completed in a final test base that includes HDP 3.1.0 and 4 nodes, where I do expect the services to start.

Results From Using This Test Base: HDP

It took me a few sessions working with this base to figure out that the out-of-box install issues for HDP 3 came down to just a few conflicts in the original Mpack parameters:

1. A conflict with the config settings for user management. The following python command was necessary:

python /var/lib/ambari-server/resources/scripts/configs.py -u admin -p admin -n DFHZ_ELK -l c7401.ambari.apache.org -t 8080 -a set -c cluster-env -k ignore_groupsusers_create -v true

2. Adjustments to the Mpack services' params.py to get hostname and java_home from slightly different paths in the config object:

hostname = config['agentLevelParams']['hostname']
java64_home = config['ambariLevelParams']['java_home']

With a working install of this new Mpack on HDP 3, I can now start working on a 2-node cluster to make sure nothing else is required to get the original 4-node ELK stack installed on HDP 3. After the 2-node Cluster Install Wizard completed, I did have to manually start some ELK services.

HDF 2-Node Test Base

Next I need to create an HDF cluster and make sure this Mpack works there. This requires a fully new test base and some changes to the mpack.json file to include the HDF stack_version.

Terminal Command Required in Vagrant Node for Ambari Master

wget http://public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/3.3.1.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.3.1.0-10.tar.gz &&
wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.7.3.0/ambari.repo &&
yum --enablerepo=extras install epel-release -y &&
yum install nano java java-devel ambari-server ambari-agent -y &&
ambari-server setup -s &&
ambari-server install-mpack --mpack=/root/hdf-ambari-mpack-3.3.1.0-10.tar.gz --verbose &&
ambari-server install-mpack --mpack=/vagrant/elasticsearch_mpack-3.0.0.0-0.tar.gz --verbose &&
ambari-server start &&
ambari-agent start

Terminal Command Required in Vagrant Node for Ambari Agent

wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.7.3.0/ambari.repo &&
yum --enablerepo=extras install epel-release -y &&
yum install nano java java-devel ambari-agent -y &&
ambari-agent start

Results From Using Test Base HDF

It took me quite a few attempts to identify one solution to a great many symptoms. Somewhere in the config object for HDF there are some non-UTF-8 characters. In some places this threw an error; in other places it silently created empty files across the ELK stack during install. I added these lines to the Python scripts:

# encoding=utf8
import sys
reload(sys)
sys.setdefaultencoding('utf8')

Once I had that fix in the component Python scripts, I was able to get the stack installed without errors and running in HDF, with only some startup issues: after starting components manually, Logstash and Kibana reported as stopped even though they were actually running.

During my next sitting I focused on the stopped services. I set the ambari-agent log level to DEBUG and noticed some additional terminal output in the command status output from "sudo service start". After changing Kibana and Logstash to plain "service start", I was able to manually stop the services on the node and then start them from Ambari. I am not 100% sure the sudo was related. At any rate, the ELK Mpack is now installed and all services are running in HDF.

I am going to run a few more complete tests to make sure I can get the cluster stable right after install without any additional work. I completed my final test with "sudo service" replaced with "service". During the Cluster Install Wizard everything installed without errors. The only issue was a warning from the Check ElasticSearch step, which runs before ElasticSearch has finished starting up. I came back to Ambari with ElasticSearch, Kibana, and Logstash running; I just had to manually start FileBeats and MetricBeats. Now I can focus on the last part of this article: upgrading ELK 6.3.2 to 6.6.1.
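One aside on the 4-node test base mentioned above: the ambari-vagrant repo used here ships an up.sh helper for starting several VMs at once, so bringing up the 4 nodes the full ELK layout needs should look roughly like this (node names follow the repo's c74xx convention):

cd ambari-vagrant/centos7.4
./up.sh 4
vagrant ssh c7401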
10-30-2018
11:08 AM
Properly setting up nifi.security.identity.mapping.pattern.kerb and nifi.security.identity.mapping.pattern.dn fixed the problem. Also, while debugging these kinds of problems, it's best to delete the Ranger plugin cache (under /etc/ranger/SERVICE_NAME/policycache/) to ensure that there are no communication problems between NiFi and Ranger.
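For reference, a typical identity-mapping setup in nifi.properties looks like the sketch below; the exact regex patterns depend on your certificate DNs and Kerberos principals, so treat these values as placeholders:

nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?)$
nifi.security.identity.mapping.value.dn=$1
nifi.security.identity.mapping.pattern.kerb=^(.*?)/(.*?)@(.*?)$
nifi.security.identity.mapping.value.kerb=$1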
09-05-2018
10:47 AM
@felix Albani
Thanks for the video.
Sorry to reply after such a long time.
I have watched and checked, but I still don't know what I have misconfigured.
Before reinstalling HDF and configuring it again, there are some questions I would like to ask.
In the nifi-app.log:
2018-09-05 17:54:07,793 WARN [Thread-22] o.a.r.admin.client.RangerAdminRESTClient Error getting policies. secureMode=false, user=nifi (auth:SIMPLE), response={"httpStatusCode":400,"statusCode":0}, serviceName=hdf_nifi
Do I need to resolve this WARN message in nifi-app.log?
[Error getting policies. secureMode=false, user=nifi (auth:SIMPLE)]
Both NiFi and Ranger have SSL enabled,
but getting policies does not seem to run in secure mode.
I have three NiFi Ranger plugin certificates with DNs [CN=ambari01.test.com, OU=NiFi; CN=ambari02.test.com, OU=NiFi; CN=ambari03.test.com, OU=NiFi].
A nifi user was manually created in the Ranger admin UI as an internal user.
The following images are my Ranger/Ambari screenshots and questions:
1. Does the nifi user need a certificate too?
2. Is the nifi user an OS user on the NiFi host, or also a NiFi application user?
#nifi user in Ranger admin
#ranger_nifi_policymgr
Thanks for your help.
08-27-2018
12:15 PM
Excellent answer, thank you! I had forgotten about the archive history. I was able to restore the file too; it appears only the last 7 lines of XML were missing. After fixing the XML, making a new flow.xml.gz, copying that back into /var/lib/nifi/conf, and restarting NiFi, my workflow was restored. I was not really able to find many matches here or online for "Cannot load flow.xml.gz", so I wanted to create this thread for anyone hitting these issues in the future.
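For anyone hitting this later, the repair came down to something like the following (the conf path is from this thread; your file names and install paths may differ):

cp /var/lib/nifi/conf/flow.xml.gz /tmp && cd /tmp
gunzip flow.xml.gz           # unpack the damaged flow
vi flow.xml                  # hand-repair the truncated closing tags
gzip flow.xml                # repack it as flow.xml.gz
cp flow.xml.gz /var/lib/nifi/conf/
# then restart NiFi so it loads the repaired flow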
08-23-2018
04:49 AM
1 Kudo
I have solved the problem.
It was a simple issue, but this is all new to me and none of the examples online show this fix.
First you need a keystore; there is a link above on creating one.
Then, the part that was missed: you need to download the certificate from the site providing the data.

echo -n | openssl s_client -connect newsapi.org:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/examplecert.crt

Then import it into your keystore:

sudo keytool -import -keystore truststore.jks -file /tmp/examplecert.crt -alias <sitename>

Then set up the StandardSSLContextService controller service.
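If it helps anyone later: after the import you can sanity-check that the certificate actually landed in the truststore with keytool (same alias as above):

keytool -list -keystore truststore.jks -alias <sitename>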
04-16-2019
01:44 PM
How did you create the Avro schema from the nested JSON?