mm/page_alloc: optimize lowmem_reserve max lookup using its semantic monotonicity

calculate_totalreserve_pages() currently finds the maximum
lowmem_reserve[j] for a zone by scanning the full forward range
[j = zone_idx .. MAX_NR_ZONES).  However, for a given zone i, the
lowmem_reserve[j] array (for j > i) is expected to form a monotonically
non-decreasing sequence in j: not as an implementation detail, but as a
direct consequence of the semantics of lowmem_reserve[].

For zone "i", lowmem_reserve[j] expresses how many pages in zone i must
effectively be kept in reserve when deciding whether an allocation class
that may allocate from zones up to j is allowed to fall back into i.  It
protects less flexible allocation classes (which cannot use higher zones)
from being starved by more flexible ones.

Given these semantics, an ordering in j naturally follows: as j
increases, the allocation class gains access to a strictly larger set
of fallback zones.  Therefore lowmem_reserve[j] is expected to be
monotonically non-decreasing in j: more flexible allocation classes
must not be allowed to deplete low zones more aggressively than less
flexible ones.

In other words, if lowmem_reserve[j] were ever observed to *decrease* as j
grows, that would be unexpected from the reserve semantics' point of view
and would likely indicate a semantic change or a misconfiguration.

The current implementation in setup_per_zone_lowmem_reserve() reflects
this policy by accumulating managed pages from higher zones and
applying the configured ratio, which yields a non-decreasing sequence.
This patch makes calculate_totalreserve_pages() rely on that
monotonicity explicitly: the maximum reserve is found by scanning
backward and stopping at the first non-zero entry.  This avoids
unnecessary iteration and reflects the conceptual model more directly.
No functional change.

To maintain this assumption explicitly, a comment is added next to
setup_per_zone_lowmem_reserve() documenting the monotonicity expectation
and noting that calculate_totalreserve_pages() relies on it.

Link: https://lkml.kernel.org/r/tencent_EB0FED91B01B1F8B6DAEE96719C5F5797F07@qq.com
Signed-off-by: fujunjie <fujunjie1@qq.com>
Acked-by: Zi Yan <ziy@nvidia.com>
Cc: Brendan Jackman <jackmanb@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
fujunjie 2025-11-15 03:02:55 +00:00 committed by Andrew Morton
parent 3cf41edc20
commit a493c7a650
1 changed file with 29 additions and 4 deletions


@@ -6311,10 +6311,21 @@ static void calculate_totalreserve_pages(void)
 			long max = 0;
 			unsigned long managed_pages = zone_managed_pages(zone);
 
 			/* Find valid and maximum lowmem_reserve in the zone */
-			for (j = i; j < MAX_NR_ZONES; j++)
-				max = max(max, zone->lowmem_reserve[j]);
+			/*
+			 * lowmem_reserve[j] is monotonically non-decreasing
+			 * in j for a given zone (see
+			 * setup_per_zone_lowmem_reserve()). The maximum
+			 * valid reserve lives at the highest index with a
+			 * non-zero value, so scan backwards and stop at the
+			 * first hit.
+			 */
+			for (j = MAX_NR_ZONES - 1; j > i; j--) {
+				if (!zone->lowmem_reserve[j])
+					continue;
+				max = zone->lowmem_reserve[j];
+				break;
+			}
 
 			/* we treat the high watermark as reserved pages. */
 			max += high_wmark_pages(zone);
@@ -6339,7 +6350,21 @@ static void setup_per_zone_lowmem_reserve(void)
 {
 	struct pglist_data *pgdat;
 	enum zone_type i, j;
 
+	/*
+	 * For a given zone node_zones[i], lowmem_reserve[j] (j > i)
+	 * represents how many pages in zone i must effectively be kept
+	 * in reserve when deciding whether an allocation class that is
+	 * allowed to allocate from zones up to j may fall back into
+	 * zone i.
+	 *
+	 * As j increases, the allocation class can use a strictly larger
+	 * set of fallback zones and therefore must not be allowed to
+	 * deplete low zones more aggressively than a less flexible one.
+	 * As a result, lowmem_reserve[j] is required to be monotonically
+	 * non-decreasing in j for each zone i. Callers such as
+	 * calculate_totalreserve_pages() rely on this monotonicity when
+	 * selecting the maximum reserve entry.
+	 */
 	for_each_online_pgdat(pgdat) {
 		for (i = 0; i < MAX_NR_ZONES - 1; i++) {
 			struct zone *zone = &pgdat->node_zones[i];