Hey all, I've already asked this on a couple of other forums without getting a reply, so I'm hoping someone here can help.
I have a dataset of about 1 GB with a "cityid" column that has 324 unique values, so after partitioning by that column I should end up with 324 folders in HDFS. But every time I run the partitioned write it fails with a NotEnoughReplicasException; the full exception messages are here: https://community.hortonworks.com/questions/238893/notenoughreplicasexception-when-writing-into-a-pa...
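To be clear about what I mean by "324 folders", here's a minimal plain-Python mock of the Hive-style partition layout I expect (this is not my actual job, which runs on the cluster; `cityid` is the real column name, everything else here is made up for illustration):

```python
import csv
import os
import tempfile
from collections import defaultdict

def write_partitioned(rows, partition_col, out_dir):
    """Mimic a Hive-style partitioned write: one <col>=<value>/ folder per key."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[partition_col]].append(row)
    for key, group in groups.items():
        # Each distinct partition value becomes its own directory.
        part_dir = os.path.join(out_dir, f"{partition_col}={key}")
        os.makedirs(part_dir, exist_ok=True)
        with open(os.path.join(part_dir, "part-00000.csv"), "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(group[0].keys()))
            writer.writeheader()
            writer.writerows(group)

# 324 distinct cityid values -> expect 324 partition folders.
rows = [{"cityid": i % 324, "value": i} for i in range(10_000)]
out = tempfile.mkdtemp()
write_partitioned(rows, "cityid", out)
print(len(os.listdir(out)))  # 324
```

This works fine locally, of course; it's the equivalent write into HDFS that blows up.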
It's definitely an HDFS issue, because the exact same job worked fine on MapR. What could possibly be the problem?
By the way, I tried this on fresh installs of both Hortonworks and Cloudera with default settings, so nothing in the configuration had been changed.
If you need any more details please ask.
Could this be a setup issue? For instance, is there some memory or other limit somewhere in HDFS that I need to increase?
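In case it helps, these are the standard HDFS commands I can run to gather DataNode/replica info (cluster-side, so the output depends on the install; happy to post the results):

```shell
# How many DataNodes are live vs. dead? A NotEnoughReplicasException
# usually means HDFS can't place the required number of block replicas.
hdfs dfsadmin -report | grep -E 'Live datanodes|Dead datanodes'

# What replication factor is the cluster configured for? (default is 3)
hdfs getconf -confKey dfs.replication

# Overall filesystem health, including under-replicated blocks.
hdfs fsck /
```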