Member since: 07-26-2017
Posts: 12
Kudos Received: 1
Solutions: 1

My Accepted Solutions

Title | Views | Posted |
---|---|---|
 | 3568 | 08-02-2017 07:36 PM |
12-21-2018 12:55 AM
I can't see the relationship between yarn.scheduler.minimum-allocation-mb and the reported error. According to the Hive documentation, yarn.scheduler.minimum-allocation-mb is the container memory minimum. But in this case the container is running out of memory, so it makes more sense to increase yarn.scheduler.maximum-allocation-mb instead. In any case, as already answered, increasing mapreduce.map.memory.mb and mapreduce.reduce.memory.mb should work, since those parameters control how much memory is given to the map and reduce tasks that Hive runs.
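As a sketch of that suggested fix, the properties can be overridden per Hive session (the sizes below are purely illustrative; pick values that fit your workload and stay within the YARN maximum-allocation limit):

```sql
-- Container sizes for the MapReduce tasks Hive launches (illustrative values).
set mapreduce.map.memory.mb=4096;
set mapreduce.reduce.memory.mb=8192;
-- Keep the JVM heap below the container size; ~80% is a common rule of thumb.
set mapreduce.map.java.opts=-Xmx3276m;
set mapreduce.reduce.java.opts=-Xmx6553m;
```

If the same sizes are needed cluster-wide, they would instead go into mapred-site.xml rather than each session.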
08-02-2017 07:36 PM
1 Kudo
Hi, yes, I fixed it after checking the Cloudera Manager version on the host. So I recommend checking the Cloudera Manager agent version on all existing hosts and making sure they all have the same Cloudera client version installed. Good luck.
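As a rough sketch of that check (the version list below is hardcoded for illustration; in practice you would collect it from each host, e.g. via `ssh $host rpm -q --qf '%{VERSION}\n' cloudera-manager-agent`):

```shell
# Versions as they might be gathered from each host (illustrative data).
versions="5.12.0
5.12.0
5.12.0"

# Count the distinct versions; a consistent cluster has exactly one.
distinct=$(printf '%s\n' "$versions" | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "All hosts report the same Cloudera Manager agent version"
else
  echo "Version mismatch across hosts -- align the agents before retrying"
fi
```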