Member since: 05-21-2017
Posts: 5
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 5964 | 12-03-2017 08:15 PM
| 522 | 05-21-2017 09:05 PM
12-03-2017 08:15 PM
2 Kudos
After a bit more tinkering, the cluster no longer has any communication issues. I checked the /etc/hosts file and, although the FQDNs for all nodes were present, I noticed that a 127.0.0.1 entry carried the same hostnames as one of the real entries. For example:

127.0.0.1 slavenode1 slave1
192.168.##.### slavenode1 slave1

After removing the duplicate names from the 127.0.0.1 line, communication worked fine: the nodes now resolved to the correct IP address, which had all the proper ports forwarded.
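The problem above is easy to spot programmatically: a hostname that appears on more than one line of /etc/hosts resolves ambiguously. The sketch below (a hypothetical helper, not part of any Hadoop tooling) scans hosts-file text and reports every name claimed by more than one IP. The sample data mirrors the example entries above.

```python
from collections import defaultdict

def duplicate_hostnames(hosts_text):
    """Return hostnames that are bound to more than one IP address.

    This is the misconfiguration described above: 127.0.0.1 and the
    real 192.168.x.x entry both carried the slave node's names.
    """
    claims = defaultdict(set)
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments/blank lines
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            claims[name].add(ip)
    return {name: ips for name, ips in claims.items() if len(ips) > 1}

sample = """\
127.0.0.1    localhost slavenode1 slave1
192.168.1.10 slavenode1 slave1
"""
print(duplicate_hostnames(sample))
# slavenode1 and slave1 each map to two IPs; localhost is fine.
```

Running something like this on each cluster node before installation would have flagged the conflict immediately.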
05-30-2017 01:44 PM
Lots of tools in the toolbox for cleansing data:

- https://community.hortonworks.com/articles/87632/ingesting-sql-server-tables-into-hive-via-apache-n.html
- https://community.hortonworks.com/articles/81270/adding-stanford-corenlp-to-big-data-pipelines-apac-1.html
- https://community.hortonworks.com/articles/79842/ingesting-osquery-into-apache-phoenix-using-apache.html
- https://community.hortonworks.com/content/kbentry/80339/iot-capturing-photos-and-analyzing-the-image-with.html
- https://community.hortonworks.com/content/kbentry/77988/ingest-remote-camera-images-from-raspberry-pi-via.html
05-21-2017 09:05 PM
I have a temporary fix for this. I uninstalled Accumulo. Since it forces a replication factor of 5, I had worked around that by lowering the max blocks setting to 2. However, since I could not find the file containing the other property ( mapreduce.client.submit.file.replication ), I instead deleted the Accumulo service and raised the max blocks back to 10, or even 50. I think this will work for now, but I would really like to know what is going on here and how to find the .xml file that sets submit.file.replication. Also, did I actually need to adjust all of those proxy users? Thank you in advance.
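Hadoop client-side properties like mapreduce.client.submit.file.replication live in the standard XML configuration files (typically mapred-site.xml under the Hadoop conf directory, with the default in mapred-default.xml inside the jars). As a rough sketch of how to hunt for such a property, the hypothetical helper below parses a Hadoop-style configuration file and looks up a property by name; the sample XML is an assumed illustration, not taken from any real cluster.

```python
import xml.etree.ElementTree as ET

def find_property(xml_text, prop_name):
    """Return the value of a Hadoop <property> by name, or None if absent.

    Hadoop config files are flat: <configuration> containing
    <property><name>...</name><value>...</value></property> entries.
    """
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == prop_name:
            return prop.findtext("value")
    return None

# Illustrative mapred-site.xml fragment (values assumed for the example).
sample_xml = """<configuration>
  <property>
    <name>mapreduce.client.submit.file.replication</name>
    <value>10</value>
  </property>
</configuration>"""

print(find_property(sample_xml, "mapreduce.client.submit.file.replication"))
```

In practice you would read each *-site.xml under the cluster's conf directory and call find_property on it; if none of them defines the property, the shipped default applies.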