Created 07-22-2018 12:56 PM
Hello,
I am running into this issue: one of my hosts says that the process directory memory is full. I am new to CDH. Any suggestions on how I should respond to this? Should I change the directory path? If so, how?
Complete warning:
Created 07-23-2018 11:23 PM
In order to address this issue, you will need to free up space in /run/cloudera-scm-agent/process.
To do so, we need to know how much space each process directory is taking and how old each one is.
You can try listing the directories in order of size with a command like:
du -h --max-depth=1 /run/cloudera-scm-agent/process | sort -h
It is OK to delete directories in /run/cloudera-scm-agent/process, provided the directory is not in use by a running process.
/run/cloudera-scm-agent/process is where the configuration for any role you are starting resides, so if you run out of space, you will not be able to start processes on that host.
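As a rough sketch of the cleanup flow described above, the following runs against a throwaway directory so it is safe to try anywhere; on a real host you would substitute /run/cloudera-scm-agent/process, and the directory names and sizes here are made-up placeholders:

```shell
# Stand-in for /run/cloudera-scm-agent/process, populated with two
# fake role directories (names modeled on the "<id>-<service>-<ROLE>"
# pattern the agent uses, but entirely invented for this example).
PROC=$(mktemp -d)
mkdir -p "$PROC/101-hdfs-DATANODE" "$PROC/202-yarn-NODEMANAGER"
dd if=/dev/zero of="$PROC/101-hdfs-DATANODE/big.log" bs=1024 count=512 2>/dev/null

# List directories in order of size, as in the du | sort pipeline above;
# the largest candidates for cleanup appear last.
du -h --max-depth=1 "$PROC" | sort -h

# Only after confirming no running process uses a directory
# (e.g. check with lsof on a real host), remove it:
rm -rf "$PROC/101-hdfs-DATANODE"
du -h --max-depth=1 "$PROC" | sort -h
```

On a live host, checking directory age (`ls -lt`) alongside size helps distinguish stale directories from ones belonging to currently running roles.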
Created 07-22-2018 11:37 PM
Hi @hadoopNoob,
Your agent directory is on a disk with little free space; you need to move the directory to another volume with more space.
You need to change the path in the configuration file, usually located at /etc/cloudera-scm-agent/config.ini.
Then restart the agent: service cloudera-scm-agent restart.
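A minimal sketch of that edit, run against a *copy* of the file so it is safe to try anywhere (on a real host you would edit /etc/cloudera-scm-agent/config.ini directly; `lib_dir` is a setting from the default config.ini, but the target path /data/cloudera-scm-agent is a made-up example, and you should check the file's inline comments for which setting governs the directory you want to move on your CM version):

```shell
# Work on a temporary copy of the config instead of the real file.
CONF=$(mktemp)
printf '[General]\nlib_dir=/var/lib/cloudera-scm-agent\n' > "$CONF"

# Point the agent's data directory at a larger disk
# (/data/cloudera-scm-agent is a hypothetical path).
sed 's|^lib_dir=.*|lib_dir=/data/cloudera-scm-agent|' "$CONF" > "$CONF.new"
mv "$CONF.new" "$CONF"
cat "$CONF"

# Then restart the agent so the change takes effect:
# sudo service cloudera-scm-agent restart
```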
Regards,
Manu.