Member since: 02-10-2019
Posts: 47
Kudos Received: 9
Solutions: 8

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4155 | 07-15-2019 12:04 PM
 | 3350 | 11-03-2018 05:00 AM
 | 5895 | 10-24-2018 07:38 AM
 | 6772 | 10-08-2018 09:47 AM
 | 1765 | 08-17-2018 06:33 AM
09-18-2018
09:59 AM
1 Kudo
@Roberto Ayuso
In Spark, spark.driver.memoryOverhead is included when calculating the total memory required for the driver. By default it is 10% of the driver memory, with a minimum of 384 MB. In your case the total request will be 8 GB + (8 GB * 0.1) = 8192 MB + 819 MB = 9011 MB ≈ 9G. YARN allocates memory only in increments/multiples of yarn.scheduler.minimum-allocation-mb. When yarn.scheduler.minimum-allocation-mb=4G, it can only allocate container sizes of 4G, 8G, 12G, etc., so a request of about 9G is rounded up to the next multiple and a 12G container is allocated for the driver. When yarn.scheduler.minimum-allocation-mb=1G, container sizes of 8G, 9G, 10G, etc. are possible, and the nearest rounded-up size of 9G will be used.
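Below is a minimal sketch of that calculation, assuming the default overhead rule (max of 10% of driver memory and 384 MB); the function name is just for illustration:

```python
import math

# Sketch: total driver container size under YARN's rounding rules.
def driver_container_mb(driver_memory_mb, min_allocation_mb):
    # Default spark.driver.memoryOverhead: 10% of driver memory, min 384 MB.
    overhead_mb = max(int(driver_memory_mb * 0.10), 384)
    requested_mb = driver_memory_mb + overhead_mb
    # YARN rounds the request up to the next multiple of
    # yarn.scheduler.minimum-allocation-mb.
    return math.ceil(requested_mb / min_allocation_mb) * min_allocation_mb

print(driver_container_mb(8192, 4096))  # 12288 MB -> a 12G container
print(driver_container_mb(8192, 1024))  # 9216 MB  -> a 9G container
```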
09-18-2018
06:44 AM
1 Kudo
@Amila Silva
HDP 3.0 supports GPU isolation in Docker using the nvidia-docker-plugin (https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker-plugin), which is part of nvidia-docker v1. Currently only v1 is supported, not the newer version.
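As a quick sanity check, here is a minimal sketch (assuming the plugin runs on its default port 3476) that queries the nvidia-docker-plugin REST service:

```python
import requests

# Assumed default endpoint of nvidia-docker-plugin (nvidia-docker v1).
PLUGIN = "http://localhost:3476"

# /v1.0/docker/cli returns the device/volume CLI arguments the plugin
# injects into docker run; a 200 response means the plugin is up.
resp = requests.get(PLUGIN + "/v1.0/docker/cli")
print(resp.status_code, resp.text)
```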
09-13-2018
12:02 PM
1 Kudo
@Michael Bronson I will look into creating an article about configuring vcores for CPU scheduling when I get time, and I will mention this part there.
09-13-2018
10:48 AM
Interesting. Can you paste the lscpu output of the nodes you are referring to?
09-13-2018
10:12 AM
1 Kudo
@Michael Bronson Yarn Vcores can ideally be set up to 2x the actual cpu present based on the use case. Thats why ambari provides the option in the scroll bar. It does not depend on the number of threads shown in lscpu. If you want to prevent over utilization of cpu by Yarn and leave cpu for OS and other processes you can set to 80% of 32 . But keep in mind that this value will only be considered by YARN for scheduling containers, when CPU scheduling is enabled.
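A quick sketch of that sizing rule, assuming lscpu reports 32 CPUs (the numbers are illustrative):

```python
# Sketch: vcore sizing guidelines described above.
physical_cpus = 32

max_vcores = physical_cpus * 2           # up to 2x, depending on use case
yarn_vcores = int(physical_cpus * 0.80)  # leave ~20% for OS and others

print(max_vcores, yarn_vcores)  # 64 25
```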
09-13-2018
09:28 AM
1 Kudo
@Michael Bronson The "CPU(s):" value in the lscpu output already takes "Thread(s) per core" into account. In general, CPU(s) = [Thread(s) per core] x [Core(s) per socket] x [Socket(s)]. It is sufficient to consider only CPU(s) when setting yarn.nodemanager.resource.cpu-vcores.
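A minimal sketch (assuming a Linux host where lscpu is available) that verifies this relationship:

```python
import re
import subprocess

# Parse "Field: value" pairs from lscpu output.
out = subprocess.check_output(["lscpu"], text=True)
fields = dict(re.findall(r"^([^:]+):\s+(.+)$", out, flags=re.M))

cpus = int(fields["CPU(s)"])
threads = int(fields["Thread(s) per core"])
cores = int(fields["Core(s) per socket"])
sockets = int(fields["Socket(s)"])

# CPU(s) already includes hyperthreads, so it can be used directly
# for yarn.nodemanager.resource.cpu-vcores.
assert cpus == threads * cores * sockets
print(f"CPU(s) = {threads} x {cores} x {sockets} = {cpus}")
```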
09-07-2018
05:09 AM
@naveen r The amount of resources a YARN application requests depends entirely on the type of application. For MapReduce, it is generally based on the input splits / number of reducers configured and the memory/vcores configured per mapper/reducer in the JobConf. To check how many resources a single application is currently using, call the following REST API and check the fields "allocatedMB", "allocatedVCores", and "runningContainers": GET http://<rm http address:port>/ws/v1/cluster/apps/<applicationId>
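A minimal sketch of that call (the ResourceManager address and application id below are placeholders):

```python
import requests

RM = "http://rm-host:8088"                 # placeholder RM address
app_id = "application_0000000000000_0001"  # placeholder application id

# The response nests the fields under the "app" key.
app = requests.get(f"{RM}/ws/v1/cluster/apps/{app_id}").json()["app"]
print(app["allocatedMB"], app["allocatedVCores"], app["runningContainers"])
```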
08-17-2018
06:33 AM
@Muthukumar S This is a normal log line whenever the BlockManager starts up; you can even check it in the NameNode logs. Invalid blocks, such as over-replicated blocks, will be deleted one hour after the NameNode starts, if any exist. There is no data loss to worry about here at all. Start your HDFS service as usual.
08-16-2018
04:51 PM
@Sivasankar Chandrasekar yarn.scheduler.maximum-allocation-mb is a scheduler-level config and applies to the ResourceManager only. It should be set to a single value, ideally the largest container your applications may want to request. You can set it to 8 GB if your applications will use at most 8 GB for a single container. If you have a requirement to launch a single 32 GB container, you can set it to 32 GB instead, but only nodes with 32 GB of memory can fulfill that container request. You should also create config groups in Ambari for the property yarn.nodemanager.resource.memory-mb and set it to different values (8 GB / 16 GB / 32 GB) for the respective nodes.
07-30-2018
02:20 PM
You can use GET http://<rm http address:port>/ws/v1/cluster/metrics to retrieve allocatedMB and allocatedVirtualCores at a given point in time; see the sketch below. Refer to http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Metrics_API
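A minimal sketch of polling that endpoint (the ResourceManager address is a placeholder):

```python
import requests

RM = "http://rm-host:8088"  # placeholder RM address

# The response nests the fields under the "clusterMetrics" key.
m = requests.get(f"{RM}/ws/v1/cluster/metrics").json()["clusterMetrics"]
print(m["allocatedMB"], m["allocatedVirtualCores"])
```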