I'm working on a Windows service that delivers files to HDFS using the REST API to connect to hadoop-httpfs.
The service works fine for files under 2MB, but it immediately throws an exception when transferring files over 2MB. I believe this is because the embedded Tomcat server has a default 2MB POST limit. On a standalone Tomcat server you can override that by adding maxPostSize="-1" to the Connector, but I haven't been able to find a place where overriding it works in the embedded version, or any Cloudera configuration item that controls it.
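For context, the service issues requests along these lines (a curl sketch; host, port, file path, and user are placeholders, not values from my actual setup):

```shell
# Hypothetical example of the kind of request the service makes: a
# one-step WebHDFS CREATE through HttpFS (data=true uploads the body
# directly). Host, port, path, and user.name are placeholders.
# Transfers over 2MB fail with an exception; smaller files succeed.
curl -X PUT \
  -H "Content-Type: application/octet-stream" \
  -T bigfile.bin \
  "http://httpfs-host:14000/webhdfs/v1/tmp/bigfile.bin?op=CREATE&data=true&user.name=hdfs"
```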
1. Edit /etc/hadoop-httpfs/tomcat-conf/conf/server.xml and add maxPostSize to the Connector settings, then restart hadoop-httpfs. You will find the edit applied in /var/lib/hadoop-httpfs/tomcat-deployment/conf/server.xml.
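The change is a single attribute on the existing HTTP Connector element. A sketch of what it looks like after the edit; the port and other attributes below are illustrative, so keep whatever values your server.xml already has:

```xml
<!-- Add maxPostSize="-1" to the existing Connector; -1 disables -->
<!-- Tomcat's POST size limit (the default is 2MB / 2097152 bytes). -->
<!-- Port and other attributes here are placeholders for your own. -->
<Connector port="14000" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxPostSize="-1" />
```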
2. Check your Tomcat version and apply a workaround
The issue appears in Tomcat 6.0.44. You can check the Tomcat version using /usr/lib/bigtop-tomcat/bin/version.sh.
According to the Tomcat 6.0.44 changelog, as part of the fix for CVE-2014-0230, Tomcat added the org.apache.coyote.MAX_SWALLOW_SIZE system property, which defaults to 2MB.
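If editing server.xml alone is not enough, the swallow limit can be raised through that system property in the JVM options. A sketch, assuming the Cloudera packaging picks up CATALINA_OPTS from /etc/hadoop-httpfs/conf/httpfs-env.sh (that file location and variable are an assumption about your install; adjust for your environment):

```shell
# Pass the system property to Tomcat's JVM so request bodies larger
# than 2MB are not cut off mid-upload; -1 removes the limit.
# NOTE: setting CATALINA_OPTS in httpfs-env.sh is an assumption about
# the Cloudera packaging; verify where your startup script reads it.
echo 'export CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.coyote.MAX_SWALLOW_SIZE=-1"' \
  >> /etc/hadoop-httpfs/conf/httpfs-env.sh

# Restart the service so the new JVM option takes effect.
service hadoop-httpfs restart
```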