I would appreciate some thoughtful answers that consider the nature of modern Hadoop - a world where DataNodes with 512 GB of RAM and 1 TB of SSD are not far off. I am asking on behalf of another interested party, and I suspect a few others may be interested in the following thread:
There is a Jira, https://issues.apache.org/jira/browse/HADOOP-12008, indicating that someone was considering implementing support for SPARC (not Spark) chipsets some day. How likely is it that Hadoop will be implemented to support and exploit the various chipsets that exist in the wild? Is most of the Hadoop stack implemented in Java, making this moot because Java runs anywhere?
1) Where are the tricky bits that prevent someone from leveraging the latest open SPARC standards, or Intel extensions such as AVX and MMX?
2) What is the likely timeline for supporting and exploiting these architectures within the Hadoop ecosystem?
3) Is the idea of leveraging such exotic hardware out of sync with the future of Hadoop, given its commodity roots?
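For context on the "Java runs anywhere" question: a minimal sketch of what the JVM already abstracts and where Hadoop's non-Java code paths come in. The class name `ProbeArch` is hypothetical; `org.apache.hadoop.util.NativeCodeLoader` is Hadoop's real entry point for its optional native library (libhadoop), which is where chipset-specific work like hardware CRC32C would live. It is loaded reflectively here so the sketch runs with or without Hadoop on the classpath.

```java
// Sketch: the JVM hides the chipset from bytecode, so most of the Hadoop
// stack is portable as-is. Architecture-specific speedups live in
// libhadoop (native code), with pure-Java fallbacks when it is absent.
public class ProbeArch {
    public static void main(String[] args) {
        // The JVM reports the underlying architecture, but compiled
        // bytecode runs unchanged across x86, SPARC, ARM, etc.
        System.out.println("os.arch = " + System.getProperty("os.arch"));

        // NativeCodeLoader is probed reflectively so this compiles and
        // runs even without Hadoop jars on the classpath (assumption:
        // absence simply means the native fast paths are unavailable).
        boolean nativeLoaded;
        try {
            Class<?> loader =
                Class.forName("org.apache.hadoop.util.NativeCodeLoader");
            nativeLoaded = (Boolean) loader
                .getMethod("isNativeCodeLoaded").invoke(null);
        } catch (ReflectiveOperationException e) {
            nativeLoaded = false; // Hadoop not on the classpath here
        }
        System.out.println("libhadoop loaded = " + nativeLoaded);
    }
}
```

Run without Hadoop on the classpath, this prints the JVM's architecture string and `libhadoop loaded = false`, which is roughly the split the questions above are probing: portability for free, exploitation only via native code.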