HDP cluster + ResourceManager logs with the warning "Reservation Exceeds Allowed number of nodes"


We have a Hadoop cluster (HDP 2.6.5 with Ambari, with 25 DataNode & NodeManager machines).

We are running a Spark Streaming application (Spark 2.1 on Hortonworks 2.6.x).

Currently the Spark Streaming applications run on all of the DataNode & NodeManager machines,

but in the ResourceManager logs we see the following INFO messages:

 

2021-06-27 14:07:01,456 INFO  fair.FSAppAttempt (FSAppAttempt.java:reservationExceedsThreshold(495)) - Reservation Exceeds Allowed number of nodes: app_id=application_1624802728037_0004 existingReservations=10 totalAvailableNodes=181 reservableNodesRatio=0.05 numAllowedReservations=10
2021-06-27 14:07:01,456 INFO  fair.FSAppAttempt (FSAppAttempt.java:reservationExceedsThreshold(495)) - Reservation Exceeds Allowed number of nodes: app_id=application_1624802728037_0003 existingReservations=10 totalAvailableNodes=181 reservableNodesRatio=0.05 numAllowedReservations=10
2021-06-27 14:07:01,456 INFO  fair.FSAppAttempt (FSAppAttempt.java:reservationExceedsThreshold(495)) - Reservation Exceeds Allowed number of nodes: app_id=application_1624802728037_0009 existingReservations=10 totalAvailableNodes=181 reservableNodesRatio=0.05 numAllowedReservations=10

Regarding "Reservation Exceeds Allowed number of nodes":

What customization should we make in order to avoid these messages?
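
From the log line it looks like the check is driven by the reservableNodesRatio value (0.05), which seems to correspond to the Fair Scheduler property yarn.scheduler.fair.reservable-nodes. Is raising that ratio the right customization? A rough sketch of what we would try in yarn-site.xml (the 0.10 value below is only an example, not a tested recommendation):

<!-- yarn-site.xml: fraction of cluster nodes a single application may hold reservations on -->
<!-- the default appears to be 0.05, matching reservableNodesRatio in the log above -->
<property>
  <name>yarn.scheduler.fair.reservable-nodes</name>
  <value>0.10</value>
</property>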



Michael-Bronson
1 REPLY

Master Mentor

@mike_bronson7 

Can you share your capacity scheduler, total memory, and vcores configs?
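
For reference, the entries that usually matter are the ones below from yarn-site.xml (these are the standard YARN property names I would expect on HDP 2.6; values are placeholders, please fill in your actual numbers), plus your fair-scheduler.xml or capacity-scheduler.xml queue definitions:

<!-- yarn-site.xml: per-NodeManager resources and the active scheduler (placeholder values) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>...</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>...</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>...</value>
</property>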