I am running CDH 6.3.2.

I set the NameNode Java heap size to 70 GB, of which almost 9 GB is used, but the block information is as follows:

**1,757,092 files and directories, 1,340,670 blocks (1,340,670 replicated blocks, 0 erasure coded block groups) = 3,097,762 total filesystem object(s).**

**Heap Memory used 8.74 GB of 69.65 GB Heap Memory. Max Heap Memory is 69.65 GB.**

**Non Heap Memory used 130.75 MB of 132.38 MB Committed Non Heap Memory. Max Non Heap Memory is <unbounded>.**

At first, the heap memory used fluctuated between 1 GB and 2 GB, but after a few days it fluctuates between 6 GB and 9 GB. I think it should be 3 GB at most.
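My 3 GB estimate is generous. Using the commonly cited rule of thumb of roughly 150 bytes of heap per namespace object (the exact per-object cost is an assumption; sizing guides often round it to about 1 GB per million objects), the metadata itself should need far less than that:

```python
# Rough NameNode heap estimate from the fsimage counts reported above.
# The ~150 bytes/object figure is a rule of thumb, not an exact measurement.
files_and_dirs = 1_757_092
blocks = 1_340_670
total_objects = files_and_dirs + blocks  # 3,097,762 as reported by the UI

bytes_per_object = 150  # assumed average heap cost per namespace object
estimated_heap_gb = total_objects * bytes_per_object / 1024**3
print(f"~{estimated_heap_gb:.2f} GB")
```

Even allowing several times that figure for JVM overhead, it is nowhere near the 6–9 GB I am seeing.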

Can anyone help me figure it out? Thanks very much.

I do have a Spark Streaming process that stores files from Kafka to HDFS every two minutes.

**And I have no idea whether the Spark Streaming process affects the Java heap.**

The Java heap usage is increasing every day. I think it will exceed 70 GB after a month, but the block count is still less than 2 million.

Is there any way to clean the cache of the Java heap?