05-17-2019
11:33 AM
Hi @dbompart, yes, the logic you mentioned is correct. I have a few more questions about containers for MapReduce and Spark. On the MapReduce side I am running a Sqoop import; on the Spark side I am running the PySpark shell on top of YARN.

Current configuration (MapReduce):
yarn.scheduler.maximum-allocation-mb: 36864 (x 2 = 73728)

My concern now is how I can limit the running containers on a per-user basis (I cannot set up different queues in the Capacity Scheduler, as mentioned above).

Whenever I run the Spark application, it also runs on top of YARN and shows:
Running Containers: 3
Allocated CPUs (vCores): 3
Total Memory Allocated: 5120 MB

Could you help me understand the logic behind these numbers? Thanks a lot.
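For reference, the PySpark shell is started with stock settings, which I believe are roughly equivalent to the invocation below. The explicit flags, the ~384 MB executor memory overhead, and the rounding up to a 1024 MB minimum allocation (yarn.scheduler.minimum-allocation-mb default) are my assumptions about the defaults, not values I have verified on this cluster:

# PySpark shell on YARN with (what I understand to be) the default resource settings:
#   2 executors + 1 ApplicationMaster container          -> 3 running containers
#   1 vCore per container                                 -> 3 allocated vCores
#   2 x (1g executor + ~384 MB overhead, rounded to 2048 MB) + 1024 MB AM -> 5120 MB total
pyspark --master yarn \
        --num-executors 2 \
        --executor-cores 1 \
        --executor-memory 1g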
04-30-2019
05:47 AM
@Geoffrey Shelton Okot Thanks a lot for helping with this.