Member since: 07-14-2016
Posts: 215
Kudos Received: 45
Solutions: 16
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 3769 | 12-13-2018 05:01 PM |
| | 10506 | 09-07-2018 06:12 AM |
| | 2753 | 08-02-2018 07:04 AM |
| | 3715 | 03-26-2018 07:38 AM |
| | 2818 | 12-06-2017 07:53 AM |
03-28-2018 06:25 AM
Can you post the version of python-requests you have installed? You can check with: pip list | grep requests
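If it turns out to be missing, one way to get it in place looks like this (a sketch assuming pip is available on the node; your OS package manager's python-requests package works too):

```
# show the installed version of python-requests, if any
pip list | grep requests
# install it if missing (assumes pip; adjust for your package manager)
pip install requests
```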
03-27-2018 06:57 AM
@Bramantya Anggriawan
You need to look for the indexing logs under the Storm worker logs. Two topologies run as part of the Metron Indexing service: random_access_indexing and batch_indexing. You can view the respective logs under /var/log/storm/worker-logs/<indexing-topo-name>/6700/worker.log.
Btw, I would also suggest looking at the Ambari agent log for any errors. I have seen the indexing service appear stopped when the python-requests package is not installed (see https://issues.apache.org/jira/browse/METRON-1451).
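A quick way to locate and follow those logs (the topology directory names carry an id suffix and the worker port 6700 may differ on your cluster, so treat this as a sketch):

```
# list the worker log directories to find the exact topology names
ls /var/log/storm/worker-logs/
# follow the random access indexing worker log (adjust name and port to match yours)
tail -f /var/log/storm/worker-logs/random_access_indexing*/6700/worker.log
```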
03-26-2018 07:38 AM
Sure, refer to these steps for creating the table and adding the user: https://docs.hortonworks.com/HDPDocuments/HCP1/HCP-1.4.1/bk_installation/content/installing_rest_app_manually.html
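For reference, the kind of SQL that walkthrough covers looks roughly like this (a sketch assuming a MySQL backend and the standard Spring Security users schema that Metron REST uses; the exact database name, column sizes, and credentials are placeholders, so verify against the doc above):

```
# connect to MySQL and load the schema (replace credentials with your own)
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS metronrest;
USE metronrest;
CREATE TABLE IF NOT EXISTS users (
  username VARCHAR(255) NOT NULL,
  password VARCHAR(255) NOT NULL,
  enabled  BOOLEAN,
  PRIMARY KEY (username)
);
-- a single login for the Metron REST application; 'your_username' and
-- 'your_password' are placeholders
INSERT INTO users (username, password, enabled) VALUES ('your_username', 'your_password', 1);
SQL
```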
03-26-2018 05:18 AM
Hello @Wang Ao, can you check in Ambari (Services -> Metron -> Configs -> REST) that the Metron REST settings are properly configured? Here's an example that you can use:
Metron JDBC Driver = org.h2.Driver
Metron JDBC password = root
Metron JDBC platform = h2
Metron JDBC URL = jdbc:h2:file:~/metrondb
Metron JDBC username = root
Active Spring profiles = dev
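Once those are saved and Metron REST is restarted, a quick sanity check is to hit the REST API directly (the hostname, the default port 8082, and the credentials here are assumptions; adjust to your environment):

```
# fetch the global config through Metron REST; an HTTP 200 with a JSON body
# means the service is up and can reach its backing database
curl -u your_username:your_password http://metron-rest-host:8082/api/v1/global/config
```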
03-20-2018 05:11 AM
Btw, I wonder why you are seeing the 'No such file or directory' error if your installation went through fine. Are you able to see the file /usr/hcp/1.4.1.0-18/metron/config/zookeeper on your Metron node?
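Something like this on the Metron node would confirm whether the path exists and what it contains:

```
# verify the zookeeper config path shipped with the HCP install is present
ls -l /usr/hcp/1.4.1.0-18/metron/config/zookeeper
```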
Can you describe your installation procedure in detail please?
03-20-2018 05:08 AM
@Bramantya Anggriawan You can find the ZooKeeper details in the Ambari UI; the ZooKeeper service summary lists the server hosts. You can then form the ZooKeeper quorum, e.g.: host1.mydomain:2181,host2.mydomain:2181
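Once you have the hosts, you can sanity-check that each ZooKeeper server is reachable on the client port (2181 is the default; this assumes nc is available and the hostnames are illustrative):

```
# ZooKeeper's 'ruok' four-letter command; a healthy server replies 'imok'
echo ruok | nc host1.mydomain 2181
echo ruok | nc host2.mydomain 2181
```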
12-06-2017 07:53 AM
@Gaurav Bapat
1 -> You can run the vagrant halt command (from the folder metron/metron-deployment/vagrant/full-dev-platform) to gracefully power down the VM.
2 -> Can you vagrant ssh into the full-dev VM and check /var/log/messages to see if you are seeing any issues? I have seen these issues when the system resources are starved. See the command sketch after this list.
You can also try increasing the VM system resources by modifying the memory and cpus fields in metron/metron-deployment/vagrant/full-dev-platform/Vagrantfile, under this section:
hosts = [{
hostname: "node1",
ip: "192.168.66.121",
memory: "8192",
cpus: 4,
promisc: 2 # enables promisc on the 'Nth' network interface
}]
3 -> For adding NiFi, you can follow the links below to get NiFi running and configured. It is recommended to use a separate cluster, since the resources of the Vagrant full-dev Metron platform will not suffice.
Check out more details on the following links:
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.1.1/bk_command-line-installation/content/ch_HDF_installing.html
You can also follow the HCP runbook here to know more:
https://docs.hortonworks.com/HDPDocuments/HCP1/HCP-1.3.1/bk_runbook/content/install_nifi_runbook.html
4 -> This could be the same problem as #2 above. Please check the logs on the VM.
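For items 1 and 2, the commands look like this (run from your Metron checkout on the host machine):

```
cd metron/metron-deployment/vagrant/full-dev-platform
vagrant halt    # gracefully power down the full-dev VM
vagrant up      # bring it back up later
vagrant ssh     # log into the running VM
# inside the VM, scan the system log for errors (e.g. OOM kills from starved resources)
sudo tail -n 200 /var/log/messages
```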
12-01-2017 08:38 AM
I would recommend increasing to 16 GB RAM since you seem to be having issues with 13 GB.
11-30-2017 12:59 PM
@Gaurav Bapat Is this on the new setup with the increased RAM?
11-30-2017 11:01 AM
You will have to discard the existing VM and build a fresh one on the new system with the higher configuration.
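Assuming the Vagrant full-dev setup, that looks like this:

```
# on the old host: remove the existing VM and its state
cd metron/metron-deployment/vagrant/full-dev-platform
vagrant destroy -f
# on the new, higher-spec host: provision a fresh VM from your Metron checkout
cd metron/metron-deployment/vagrant/full-dev-platform
vagrant up
```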