mirror of https://github.com/torvalds/linux.git
mm/page_alloc: prevent reporting pcp->batch = 0
zone_batchsize returns the appropriate value that should be used for
pcp->batch. If it finds a zone with fewer than 4096 pages or PAGE_SIZE > 1M,
however, the math goes wrong: batch is first clamped up to an intermediate
value of 1, which is then rounded down to the nearest power of two and has 1
subtracted from it. Since 1 is already a power of two, we end up with
batch = 1 - 1 = 0:

  batch = rounddown_pow_of_two(batch + batch/2) - 1;

A pcp->batch value of 0 is nonsensical. If it were actually set, functions
like drain_zone_pages would become no-ops, since they could only free 0
pages at a time. Of the two callers of zone_batchsize, the one that actually
sets pcp->batch works around this by setting pcp->batch to the maximum of 1
and zone_batchsize. The other caller, zone_pcp_init, however, incorrectly
prints the batch size of the zone as 0. This is probably rare for a typical
zone, but the DMA zone can often have fewer than 4096 pages, which means it
will print "LIFO batch:0".

Before:
  [    0.001216] DMA zone: 3998 pages, LIFO batch:0

After:
  [    0.001210] DMA zone: 3998 pages, LIFO batch:1

Instead of dealing with the error handling and the mismatch between the
reported and actual zone batch size, just return 1 if the computed batch is
1 page or less, before the rounding.

Link: https://lkml.kernel.org/r/20251009192933.3756712-3-joshua.hahnjy@gmail.com
Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Brendan Jackman <jackmanb@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
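For illustration, here is a minimal userspace sketch of the pre-patch
arithmetic (an assumption built from the commit message and the hunk below,
not the actual kernel function; it models a 4 KiB PAGE_SIZE and a simple
rounddown_pow_of_two helper) showing how a 3998-page DMA zone ends up with
batch = 0:

/* Standalone model of the old zone_batchsize() math; not kernel code. */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define SZ_256K		(256UL * 1024)

static unsigned long rounddown_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

static int old_zone_batchsize(unsigned long managed_pages)
{
	unsigned long batch;

	batch = min_ul(managed_pages >> 12, SZ_256K / PAGE_SIZE);
	if (batch < 1)		/* a 3998-page zone hits this clamp */
		batch = 1;

	/* 1 is already a power of two, so this yields 1 - 1 = 0 */
	return rounddown_pow_of_two(batch + batch / 2) - 1;
}

int main(void)
{
	printf("3998-page zone:    batch = %d\n", old_zone_batchsize(3998));    /* 0 */
	printf("1M-page zone:      batch = %d\n", old_zone_batchsize(1 << 20)); /* 63 */
	return 0;
}

With the patch applied, the early "batch <= 1" check returns 1 before the
rounding, so the small-zone case reports batch = 1 instead of 0.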
parent 4dcf65bf5b
commit 2783088ef2
mm/page_alloc.c
@@ -5866,8 +5866,8 @@ static int zone_batchsize(struct zone *zone)
 	 * and zone lock contention.
 	 */
 	batch = min(zone_managed_pages(zone) >> 12, SZ_256K / PAGE_SIZE);
-	if (batch < 1)
-		batch = 1;
+	if (batch <= 1)
+		return 1;
 
 	/*
 	 * Clamp the batch to a 2^n - 1 value. Having a power
@@ -6018,7 +6018,7 @@ static void zone_set_pageset_high_and_batch(struct zone *zone, int cpu_online)
 {
 	int new_high_min, new_high_max, new_batch;
 
-	new_batch = max(1, zone_batchsize(zone));
+	new_batch = zone_batchsize(zone);
 	if (percpu_pagelist_high_fraction) {
 		new_high_min = zone_highsize(zone, new_batch, cpu_online,
 					     percpu_pagelist_high_fraction);