02-09-2017 07:51 PM
We are running a Spark Streaming job with YARN as the cluster manager. I have dedicated 7 vcores per node via yarn-site.xml, as shown in the screenshot below.
When the job is running, it only uses 2 vcores and leaves the other 5 idle, and the job is slow, with a lot of batches queued up.
How can we make it use all 7 vcores that are available to it so that it speeds up our job? The screenshot below shows the usage while the job is running.
We would greatly appreciate it if any of the experts in the community could help out, as we are new to YARN and Spark.
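For context, the per-node vcore limit we set is the standard yarn.nodemanager.resource.cpu-vcores property. A minimal sketch of the relevant yarn-site.xml entry (the value here is illustrative, not our exact configuration, which is in the screenshot above):

```
<!-- yarn-site.xml: sketch of the per-node vcore limit -->
<property>
  <!-- vcores this NodeManager advertises to the ResourceManager -->
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>7</value>
</property>
```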
02-13-2017 08:46 PM
From your first screenshot, you have already maxed out your memory, so you won't be able to allocate more YARN containers. You may want to lower your Spark memory settings or increase the cores per executor when submitting your Spark application.
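As a rough sketch of what that submission could look like (the class name, jar, and numbers below are placeholders, not the poster's actual values, and would need tuning against the node memory shown in the screenshots):

```
# Illustrative spark-submit sketch:
#   --executor-cores  : vcores per executor; raising it uses more of the 7 available vcores
#   --executor-memory : per-executor memory; lowering it lets YARN fit more containers
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 2 \
  --executor-cores 3 \
  --executor-memory 2g \
  --driver-memory 1g \
  --class com.example.StreamingApp \
  streaming-app.jar
```

The trade-off is that total requested memory (executors x executor memory, plus overhead) must still fit under the YARN memory limit per node, otherwise the extra containers simply won't be allocated regardless of how many vcores are free.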