Support Questions
Find answers, ask questions, and share your expertise

hdfs httpfs out of memory error


We are experiencing out of memory errors with httpfs.


This is happening when users use Hue to access a particularly large folder in HDFS.


We increased "Java Heap Size of HttpFS" to 1 GB, but are still facing the issue.


There is also a "Java Client Heap Size" parameter - would increasing that help in our case?
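Is there also a way to isolate whether HttpFS itself is the bottleneck, e.g. by listing the folder through the HttpFS REST API directly instead of going through Hue? Something like the following (hostname, folder path, and user below are placeholders, and 14000 is the default HttpFS port):

```shell
# List the problem folder via HttpFS directly, bypassing Hue.
# Replace httpfs-host, /big/folder, and hdfs-user with real values.
curl -s "http://httpfs-host:14000/webhdfs/v1/big/folder?op=LISTSTATUS&user.name=hdfs-user"
```

If that call alone pushes the HttpFS process over its heap, that would point at the listing itself rather than anything Hue adds on top.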


Appreciate the insights.


@vtpcnk It really helps community members respond on target if you show us the actual error messages. Also, give us some idea of your platform, version, node specs, etc.



That said, and assuming your system has memory available: increasing the Java heap settings, for HttpFS and the client separately, should be experimented with in tested increments. Assuming these values are at their defaults, double the value, restart, and retest your issue. Do not just throw a huge number at it. If you reach large values during these increments and the issue still is not addressed, revert to the original setting and re-evaluate the actual error message context.
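As a sketch of what those increments might look like: in Cloudera Manager the knob is the "Java Heap Size of HttpFS" property you already changed. Outside CM, a rough equivalent (file location and variable usage can vary by version; this assumes the Tomcat-based HttpFS shipped before Hadoop 3.0) is to set the JVM options in httpfs-env.sh:

```shell
# Hypothetical increment: starting from the current 1 GB, double per test cycle.
# In httpfs-env.sh (path varies by install); CATALINA_OPTS is used because
# this HttpFS version runs an embedded Tomcat.
export CATALINA_OPTS="${CATALINA_OPTS} -Xmx2g"
# Next cycle would be -Xmx4g; revert to the original value if the error persists.
```

Whichever mechanism you use, change one setting per cycle so you can tell which one (HttpFS heap vs. client heap) actually moved the needle.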


Oddly, nothing turned up in the log, but we got this alert:

Content: The health test result for HTTPFS_SCM_HEALTH has become bad: This role's process exited. This role is supposed to be started.


We are using Cloudera 5.15.1 on RHEL.