Member since: 10-22-2021
Posts: 15
Kudos Received: 0
Solutions: 0
10-24-2023
09:36 PM
I think the root.hdfs queue does not have sufficient resources to run the job. From the Resource Manager UI, verify whether there are any pending or running jobs/applications in the root.hdfs queue; if something is running that is no longer required, kill it. Also, from the Spark side, try submitting with smaller resource requests to test.
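As a sketch, the same check and cleanup can be done from the YARN CLI instead of the RM UI (the application ID below is a placeholder; replace it with a real one from the listing):

```shell
# List running applications and filter for the root.hdfs queue
yarn application -list -appStates RUNNING | grep root.hdfs

# Kill an application that is no longer needed (placeholder ID)
yarn application -kill application_1600000000000_0001
```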
01-26-2022
09:54 PM
Can you check the user limit of the queue and the maximum AM resource percentage? RM UI -> Scheduler -> expand your queue (take a screenshot and attach it to this case).
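If the AM limit or user limit turns out to be the bottleneck, these are the CapacityScheduler properties that control them; a sketch assuming the queue path root.hdfs, with illustrative values:

```xml
<!-- capacity-scheduler.xml: illustrative values, tune for your cluster -->
<property>
  <!-- Fraction of the queue's resources ApplicationMasters may occupy -->
  <name>yarn.scheduler.capacity.root.hdfs.maximum-am-resource-percent</name>
  <value>0.2</value>
</property>
<property>
  <!-- Multiplier on a single user's share of the queue -->
  <name>yarn.scheduler.capacity.root.hdfs.user-limit-factor</name>
  <value>1</value>
</property>
```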
12-07-2021
10:34 PM
Can you share a few details? 1. Is this a plain Apache cluster or an enterprise distribution? 2. How are you launching/submitting your job? 3. Please check connectivity between your hosts. 4. Make sure the cluster is up and running.
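For point 2, it helps to share the exact submit command. A minimal spark-submit sketch for a YARN cluster (the class name, jar path, queue, and resource sizes are placeholders):

```shell
# Placeholder class and jar path -- replace with your application's
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --queue root.hdfs \
  --num-executors 2 \
  --executor-memory 1g \
  --executor-cores 1 \
  --class com.example.MyApp \
  /path/to/myapp.jar
```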
12-06-2021
10:53 AM
@hadoopFreak01 One idea is to set up a cron job to clean up older files periodically, but I don't think there is any built-in feature available to purge these files.
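A minimal cleanup sketch along those lines, assuming a local directory that Spark writes to (CLEANUP_DIR is a placeholder -- point it at the actual work or event-log directory and adjust the age threshold):

```shell
#!/bin/sh
# Hypothetical cleanup sketch: delete files older than 7 days.
# CLEANUP_DIR is an assumption -- set it to the directory Spark
# writes to before scheduling this from cron.
CLEANUP_DIR="${CLEANUP_DIR:-/tmp/spark-cleanup-demo}"
mkdir -p "$CLEANUP_DIR"
find "$CLEANUP_DIR" -type f -mtime +7 -delete
```

Scheduled from cron, e.g. `0 3 * * * /path/to/cleanup.sh` to run the script daily at 03:00.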
11-19-2021
08:01 AM
Thanks, but I want to remove the data produced by Spark applications launched via spark-submit, not data in HDFS. Could you confirm those are the commands to use in this case?