Member since: 09-10-2015
Posts: 93
Kudos Received: 33
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2045 | 10-07-2016 03:37 PM
 | 2048 | 10-04-2016 04:14 PM
 | 2166 | 09-29-2016 03:17 PM
 | 1109 | 09-28-2016 03:14 PM
 | 1852 | 09-09-2016 09:41 PM
07-15-2016
03:25 PM
Good questions. Not sure, but I'm checking. If I find answers, I'll post them (or send the Solr expert this way). 🙂
07-12-2016
06:12 PM
Hi Saurabh, here is a partial response in case it's helpful: HDP Search (which includes Solr) should be deployed on all nodes that run HDFS. Installation through Ambari is not supported quite yet. The HDP Search Guide contains basic information and links to additional documentation.
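As a rough sketch of the manual install (the repo setup and package name here are my recollection from the HDP Search Guide, so treat them as assumptions and check the guide for your HDP version), it boils down to running the package install on every HDFS node:

```
# Run on each node that hosts HDFS (RHEL/CentOS example).
# Assumes the HDP Search repo file is already in /etc/yum.repos.d/ per the HDP Search Guide;
# the package name lucidworks-hdpsearch is taken from that guide.
yum install lucidworks-hdpsearch
```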
05-24-2016
03:42 AM
@Smart Solutions please see the note from @jzhang below (thanks, Jeff!)
05-23-2016
04:22 PM
1 Kudo
@Smart Solutions I'll see if I can find out more info.
05-20-2016
10:18 PM
From what I understand, STS runs a version of HS2, and the two processes need to use different ports. The default port used by Ambari for STS is 10015 (we need to revise the example in the document), and the default port used by Ambari for HS2 is 10000. (See Configuring Ports in the HDP Reference Guide for a list of ports per component.) As a side note, you mentioned that you configured the STS port as 100015, but the first beeline command lists 10015. If that's what is actually in your configuration (as opposed to a typo in the post), that mismatch would keep beeline from connecting. Accessing Spark SQL through JDBC in the Spark Guide has additional information about using the STS, and the Spark Guide has some Ambari-specific configuration information in earlier chapters (Installing STS after Deploying Spark and Customizing the STS Port; links are for HDP 2.4.0).
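For reference, here's a minimal beeline sketch of the two connections, using the Ambari default ports mentioned above (host names, user, and database are placeholders):

```
# Spark Thrift Server (STS) -- Ambari default port 10015
beeline -u "jdbc:hive2://<sts-host>:10015/default" -n <user>

# HiveServer2 (HS2) -- Ambari default port 10000, a separate process
beeline -u "jdbc:hive2://<hs2-host>:10000/default" -n <user>
```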
05-17-2016
12:40 AM
Looks like you've resolved your question, but for other readers interested in IPC for Hive queries there's a new diagram in the Spark Guide: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_spark-guide/content/ch_accessing-spark-sql.html.
05-17-2016
12:27 AM
For additional information, see recent additions to the Kafka Guide. Here's the link for HDP 2.4.2: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_kafka-user-guide/content/ch_kafka_mirrormaker.html
04-27-2016
05:14 AM
@jsirota, maybe so! MacBook Pro, 2.5 GHz i7 with 16 GB of memory, OS X 10.9.5, vanilla VirtualBox config.
04-27-2016
05:12 AM
@Sunile Manjee, sure. My approach is more of a workaround than a solution, but I'll do my best to describe it here.
I followed these steps while the VM was still running with Ambari and HDP deployed (the point at which the error occurred).
1. Make a backup copy of the Vagrantfile in incubator-metron-Metron_0.1BETA_rc7/deployment/vagrant/singlenode-vagrant/.
2. Edit the Vagrantfile so that Ansible uses the metron_install.yml playbook instead of metron_full_install.yml: near the end of the file, change the line
ansible.playbook = "../../playbooks/metron_full_install.yml"
to
ansible.playbook = "../../playbooks/metron_install.yml"
and save your changes.
3. From the deployment/vagrant/singlenode-vagrant directory, run vagrant provision. (If the VM is stopped/halted, you'd probably want to use vagrant up instead. It shouldn't hurt to run vagrant up; if it isn't needed, it will tell you.) A condensed shell sketch of these steps follows below.
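Here is that condensed shell sketch of steps 1-3 (the sed one-liner is just a shortcut for the hand edit described above):

```
cd incubator-metron-Metron_0.1BETA_rc7/deployment/vagrant/singlenode-vagrant

# 1. Back up the Vagrantfile
cp Vagrantfile Vagrantfile.bak

# 2. Point Ansible at the Metron-only playbook
sed -i.orig 's|playbooks/metron_full_install.yml|playbooks/metron_install.yml|' Vagrantfile

# 3. Re-provision the running VM ("vagrant up" instead if it's halted)
vagrant provision
```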
At this point the Metron installation playbook stepped through nine blocks of instructions; it looks like they're called "plays". You can see the list in the metron_install.yml file, and you can look in the roles directory for a better understanding of the tasks associated with the role in each play (hadoop_setup, mysql-server, etc.).
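If you want to see the plays and roles without opening an editor, something like this works from the singlenode-vagrant directory (the ../../roles path assumes the layout described above, with roles/ sitting next to playbooks/ under deployment/):

```
# List the plays (one "hosts:" block per play) in the Metron-only playbook
grep -n "hosts:" ../../playbooks/metron_install.yml

# Browse the roles referenced by those plays (hadoop_setup, mysql-server, ...)
ls ../../roles/
```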
When the provisioning finished I could see all four topologies in the Storm UI but I didn't see any data. I'd gotten an error during the hadoop_setup role (under hosts: hadoop_client) so I re-ran that part by making a backup copy of metron_install.yml, removing everything except the hadoop_client section from the original file, and running vagrant provision again. A little while later I started seeing data in the Metron UI -- great to see it!
I hope I've remembered everything; let me know if any of it isn't clear.
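In shell terms, the hadoop_client re-run described above was roughly this (the hand edit in the middle is the same trim-the-playbook step, not an Ansible feature):

```
# Back up the playbook, then trim the original so only the "hosts: hadoop_client" play remains
cp ../../playbooks/metron_install.yml ../../playbooks/metron_install.yml.bak
# (edit ../../playbooks/metron_install.yml by hand, keeping just the hadoop_client section)

# Re-run provisioning with the trimmed playbook
vagrant provision
```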
04-26-2016
11:17 PM
The Ansible playbook failed at the end of ambari_install.yml with Ambari and HDP running, so I re-provisioned the single-node cluster with the second half of the playbook, the metron_install.yml script. That deployed all four topologies and the Metron UI. I saw an error in the HDFS section of hadoop_setup, so I re-ran (re-provisioned) that part, and now I'm seeing data in the Metron UI. My guess is that the playbook timed out before the HDP components had time to come up. Next I'm going to suspend the VMs before my Mac melts down... the fan's pegged! 🙂
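For anyone following along, pausing the cluster from the singlenode-vagrant directory is just:

```
# Suspend the VM without destroying it; resume later with "vagrant up" (or "vagrant resume")
vagrant suspend
```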