ca77f290cf
kasan: check KASAN_NO_FREE_META in __kasan_metadata_size
...
Patch series "kasan: switch tag-based modes to stack ring from per-object
metadata", v3.
This series makes the tag-based KASAN modes use a ring buffer for storing
stack depot handles for alloc/free stack traces for slab objects instead
of per-object metadata. This ring buffer is referred to as the stack
ring.
On each alloc/free of a slab object, the tagged address of the object and
the current stack trace are recorded in the stack ring.
On each bug report, if the accessed address belongs to a slab object, the
stack ring is scanned for matching entries. The newest entries are used
to print the alloc/free stack traces in the report: one entry for alloc
and one for free.
The advantages of this approach over storing stack trace handles in
per-object metadata with the tag-based KASAN modes:
- Makes it possible to find relevant stack traces for use-after-free bugs without
using quarantine for freed memory. (Currently, if the object was
reallocated multiple times, the report contains the latest alloc/free
stack traces, not necessarily the ones relevant to the buggy allocation.)
- Makes it possible to better identify and mark use-after-free bugs, effectively
making the CONFIG_KASAN_TAGS_IDENTIFY functionality always-on.
- Has fixed memory overhead.
The disadvantage:
- If the affected object was allocated/freed long before the bug happened
and the stack trace events were purged from the stack ring, the report
will have no stack traces.
Discussion
==========
The proposed implementation of the stack ring uses a single ring buffer
for the whole kernel. This might lead to contention due to atomic
accesses to the ring buffer index on multicore systems.
At this point, it is unknown whether the performance impact of this
contention would be significant compared to the slowdown of stack trace
collection itself, especially once the planned changes to the latter land;
see the section below.
For now, the proposed implementation is deemed to be good enough, but this
might need to be revisited once the stack collection becomes faster.
A considered alternative is to keep a separate ring buffer for each CPU
and then iterate over all of them when printing a bug report. This
approach requires somehow figuring out which of the stack rings has the
freshest stack traces for an object if multiple stack rings have them.
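As a rough userspace sketch of the single-ring design (all names, sizes, and fields here are hypothetical; the real implementation records stack depot handles inside the KASAN runtime), one atomic increment claims a slot on each alloc/free, and a report scans backwards for the newest matching entries:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define STACK_RING_SIZE 8  /* fixed memory overhead */

struct stack_ring_entry {
	uintptr_t addr;   /* tagged address of the slab object */
	uint32_t stack;   /* stack depot handle for the trace */
	bool is_free;     /* alloc event or free event */
};

static struct stack_ring_entry stack_ring[STACK_RING_SIZE];
static atomic_size_t stack_ring_pos;  /* the shared, contended index */

/* On each alloc/free: one atomic increment claims a slot, so the
 * oldest entries are overwritten once the ring wraps. */
static void stack_ring_record(uintptr_t addr, uint32_t stack, bool is_free)
{
	size_t pos = atomic_fetch_add(&stack_ring_pos, 1) % STACK_RING_SIZE;

	stack_ring[pos] = (struct stack_ring_entry){ addr, stack, is_free };
}

/* On a bug report: scan backwards from the newest entry for the
 * accessed address; returns false if the events were already purged. */
static bool stack_ring_find(uintptr_t addr, bool is_free, uint32_t *stack)
{
	size_t end = atomic_load(&stack_ring_pos);
	size_t n = end < STACK_RING_SIZE ? end : STACK_RING_SIZE;

	for (size_t i = 0; i < n; i++) {
		struct stack_ring_entry *e =
			&stack_ring[(end - 1 - i) % STACK_RING_SIZE];

		if (e->addr == addr && e->is_free == is_free) {
			*stack = e->stack;
			return true;
		}
	}
	return false;
}
```

A single global ring keeps the overhead fixed, at the cost of every CPU hitting the same atomic index, which is the contention concern discussed above.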
Further plans
=============
This series is a part of an effort to make KASAN stack trace collection
suitable for production. This requires stack trace collection to be fast
and memory-bounded.
The planned steps are:
1. Speed up stack trace collection (potentially, by using SCS;
patches on-hold until steps #2 and #3 are completed).
2. Keep stack trace handles in the stack ring (this series).
3. Add a memory-bounded mode to stack depot or provide an alternative
memory-bounded stack storage.
4. Potentially, implement stack trace collection sampling to minimize
the performance impact.
This patch (of 34):
__kasan_metadata_size() calculates the size of the redzone for objects in
a slab cache.
When accounting for the presence of kasan_free_meta in the redzone, this
function only compares free_meta_offset with 0. But free_meta_offset
could also be equal to KASAN_NO_FREE_META, which indicates that
kasan_free_meta is not present at all.
Add a comparison with KASAN_NO_FREE_META into __kasan_metadata_size().
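A minimal userspace model of the fix, consistent with the description above (the offsets, sizes, and field layout are illustrative, not the kernel's real kasan_alloc_meta/kasan_free_meta definitions):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Sentinel meaning "kasan_free_meta is not present at all"; nonzero,
 * so a plain "free_meta_offset != 0" test mistakes it for presence. */
#define KASAN_NO_FREE_META INT_MAX

struct kasan_info {
	unsigned int alloc_meta_offset;  /* nonzero: alloc meta in redzone */
	unsigned int free_meta_offset;   /* 0: stored inside the object */
};

enum { ALLOC_META_SIZE = 16, FREE_META_SIZE = 16 };  /* hypothetical */

/* Fixed version: only count kasan_free_meta toward the redzone when
 * it is actually placed there, i.e. the offset is neither 0
 * (in-object) nor KASAN_NO_FREE_META (absent). */
static size_t kasan_metadata_size(const struct kasan_info *info)
{
	return (info->alloc_meta_offset ? ALLOC_META_SIZE : 0) +
	       (info->free_meta_offset &&
		info->free_meta_offset != KASAN_NO_FREE_META ?
			FREE_META_SIZE : 0);
}
```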
Link: https://lkml.kernel.org/r/cover.1662411799.git.andreyknvl@google.com
Link: https://lkml.kernel.org/r/c7b316d30d90e5947eb8280f4dc78856a49298cf.1662411799.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Peter Collingbourne <pcc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:56 -07:00
b05f41a1aa
filemap: convert filemap_range_has_writeback() to use folios
...
Removes 3 calls to compound_head().
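For context on why these folio conversions keep removing calls to compound_head() (a toy userspace model, not the kernel's real struct page/struct folio): every page-based helper must resolve a possibly-tail page to its head page, while a folio by construction already refers to the head, so the resolution happens at most once per function:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model: tail pages of a compound page point back at the head;
 * a folio wraps the head page directly. */
struct page { struct page *head; bool dirty; };
struct folio { struct page *head; };

static struct page *compound_head(struct page *page)
{
	return page->head;
}

/* Page API: every helper pays for a head lookup. */
static bool PageDirty(struct page *page)
{
	return compound_head(page)->dirty;
}

/* Folio API: the head is resolved once, when the folio is obtained. */
static struct folio page_folio(struct page *page)
{
	return (struct folio){ compound_head(page) };
}

static bool folio_test_dirty(struct folio folio)
{
	return folio.head->dirty;
}
```

A converted function calls page_folio() once at entry (or receives a folio from its caller), and every subsequent test or flag update skips the head lookup, which is the saving these commit messages count.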
Link: https://lkml.kernel.org/r/20220905214557.868606-1-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:56 -07:00
c274cd5c9b
mm/damon/sysfs: simplify the judgement whether kdamonds are busy
...
It is unnecessary to count the running kdamonds to judge whether kdamonds
are busy. Instead, use the damon_sysfs_kdamond_running() helper and
return -EBUSY directly as soon as a running kdamond is found. Also merge
this with the check for whether a kdamond has a pending sysfs command
callback request, to make the code clearer.
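A schematic of the simplified check (the struct and the iterating function here are hypothetical; only damon_sysfs_kdamond_running() is a helper named by the commit):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the sysfs kdamond state. */
struct kdamond {
	bool running;
	bool has_cmd_request;  /* pending sysfs command callback */
};

static bool damon_sysfs_kdamond_running(const struct kdamond *kdamond)
{
	return kdamond->running;
}

/* Return -EBUSY on the first busy kdamond instead of first counting
 * how many are running; the pending-callback check is merged in. */
static int damon_sysfs_check_busy(const struct kdamond *kdamonds, size_t nr)
{
	for (size_t i = 0; i < nr; i++)
		if (damon_sysfs_kdamond_running(&kdamonds[i]) ||
		    kdamonds[i].has_cmd_request)
			return -EBUSY;
	return 0;
}
```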
Link: https://lkml.kernel.org/r/1662302166-13216-1-git-send-email-kaixuxia@tencent.com
Signed-off-by: Kaixu Xia <kaixuxia@tencent.com>
Reviewed-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:55 -07:00
8eeda55fe0
mm/hugetlb.c: remove unnecessary initialization of local `err'
...
Link: https://lkml.kernel.org/r/20220905020918.3552-1-zeming@nfschina.com
Signed-off-by: Li zeming <zeming@nfschina.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:55 -07:00
19672a9e4a
mm: convert lock_page_or_retry() to folio_lock_or_retry()
...
Remove a call to compound_head() in each of the two callers.
Link: https://lkml.kernel.org/r/20220902194653.1739778-58-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:55 -07:00
0c826c0b6a
rmap: remove page_unlock_anon_vma_read()
...
This has simply been an alias for anon_vma_unlock_read() since 2011.
Link: https://lkml.kernel.org/r/20220902194653.1739778-56-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:54 -07:00
29eea9b5a9
mm: convert page_get_anon_vma() to folio_get_anon_vma()
...
With all callers now passing in a folio, rename the function and convert
all callers. Removes a couple of calls to compound_head() and a reference
to page->mapping.
Link: https://lkml.kernel.org/r/20220902194653.1739778-55-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:54 -07:00
684555aacc
huge_memory: convert unmap_page() to unmap_folio()
...
Remove a folio->page->folio conversion.
Link: https://lkml.kernel.org/r/20220902194653.1739778-54-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:54 -07:00
3e9a13daa6
huge_memory: convert split_huge_page_to_list() to use a folio
...
Saves many calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-53-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:54 -07:00
c33db29231
migrate: convert unmap_and_move_huge_page() to use folios
...
Saves several calls to compound_head() and removes a couple of uses of
page->lru.
Link: https://lkml.kernel.org/r/20220902194653.1739778-52-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:54 -07:00
682a71a1b6
migrate: convert __unmap_and_move() to use folios
...
Removes a lot of calls to compound_head(). Also remove a VM_BUG_ON that
can never trigger as the PageAnon bit is the bottom bit of page->mapping.
Link: https://lkml.kernel.org/r/20220902194653.1739778-51-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:53 -07:00
595af4c936
rmap: convert page_move_anon_rmap() to use a folio
...
Removes one call to compound_head() and a reference to page->mapping.
Link: https://lkml.kernel.org/r/20220902194653.1739778-50-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:53 -07:00
3b344157c0
mm: remove try_to_free_swap()
...
All callers have now been converted to folio_free_swap() and we can remove
this wrapper.
Link: https://lkml.kernel.org/r/20220902194653.1739778-49-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:53 -07:00
9202d527b7
memcg: convert mem_cgroup_swap_full() to take a folio
...
All callers now have a folio, so convert the function to take a folio.
Saves a couple of calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-48-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:53 -07:00
a160e5377b
mm: convert do_swap_page() to use folio_free_swap()
...
Also convert should_try_to_free_swap() to use a folio. This removes a few
calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-47-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:53 -07:00
b4e6f66e45
ksm: use a folio in replace_page()
...
Replace three calls to compound_head() with one.
Link: https://lkml.kernel.org/r/20220902194653.1739778-46-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:53 -07:00
98b211d641
madvise: convert madvise_free_pte_range() to use a folio
...
Saves a lot of calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-44-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:52 -07:00
2fad3d14b9
huge_memory: convert do_huge_pmd_wp_page() to use a folio
...
Removes many calls to compound_head(). Does not remove the assumption
that a folio may not be larger than a PMD.
Link: https://lkml.kernel.org/r/20220902194653.1739778-43-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:52 -07:00
e4a2ed9490
mm: convert do_wp_page() to use a folio
...
Saves many calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-42-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:52 -07:00
71fa1a533d
swap: convert swap_writepage() to use a folio
...
Removes many calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-41-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:52 -07:00
aedd74d439
swap_state: convert free_swap_cache() to use a folio
...
Saves several calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-40-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:51 -07:00
cb691e2f28
mm: remove lookup_swap_cache()
...
All callers have now been converted to swap_cache_get_folio(), so we can
remove this wrapper.
Link: https://lkml.kernel.org/r/20220902194653.1739778-39-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:51 -07:00
5a423081b2
mm: convert do_swap_page() to use swap_cache_get_folio()
...
Saves a folio->page->folio conversion.
Link: https://lkml.kernel.org/r/20220902194653.1739778-38-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:51 -07:00
f102cd8b17
swapfile: convert unuse_pte_range() to use a folio
...
Delay fetching the precise page from the folio until we're in unuse_pte().
Saves many calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-37-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:51 -07:00
2c3f6194b0
swapfile: convert __try_to_reclaim_swap() to use a folio
...
Saves five calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-36-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:51 -07:00
000085b9af
swapfile: convert try_to_unuse() to use a folio
...
Saves five calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-35-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:50 -07:00
923e2f0e7c
shmem: remove shmem_getpage()
...
With all callers removed, remove this wrapper function. The flags are now
mysteriously called SGP, but I think we can live with that.
Link: https://lkml.kernel.org/r/20220902194653.1739778-34-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:50 -07:00
12acf4fbc4
userfaultfd: convert mcontinue_atomic_pte() to use a folio
...
shmem_getpage() is being replaced by shmem_get_folio() so use a folio
throughout this function. Saves several calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-33-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:50 -07:00
7459c149ae
khugepaged: call shmem_get_folio()
...
shmem_getpage() is being removed, so call its replacement and find the
precise page ourselves.
Link: https://lkml.kernel.org/r/20220902194653.1739778-32-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:50 -07:00
e4b57722d0
shmem: convert shmem_get_link() to use a folio
...
Symlinks will never use a large folio, but using the folio API removes a
lot of unnecessary folio->page->folio conversions.
Link: https://lkml.kernel.org/r/20220902194653.1739778-31-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:50 -07:00
7ad0414bde
shmem: convert shmem_symlink() to use a folio
...
While symlinks will always be < PAGE_SIZE, using the folio APIs gets rid
of unnecessary calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-30-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
b0802b22a9
shmem: convert shmem_fallocate() to use a folio
...
Call shmem_get_folio() and use the folio APIs instead of the page APIs.
Saves several calls to compound_head() and removes assumptions about the
size of a large folio.
Link: https://lkml.kernel.org/r/20220902194653.1739778-29-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
4601e2fc8b
shmem: convert shmem_file_read_iter() to use shmem_get_folio()
...
Use a folio throughout, saving five calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-28-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
eff1f906c2
shmem: convert shmem_write_begin() to use shmem_get_folio()
...
Use a folio throughout this function, saving a couple of calls to
compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-27-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
a7f5862cc0
shmem: convert shmem_get_partial_folio() to use shmem_get_folio()
...
Get rid of an unnecessary folio->page->folio conversion.
Link: https://lkml.kernel.org/r/20220902194653.1739778-26-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
4e1fc793ad
shmem: add shmem_get_folio()
...
With no remaining callers of shmem_getpage_gfp(), add shmem_get_folio()
and reimplement shmem_getpage() as a call to shmem_get_folio().
Link: https://lkml.kernel.org/r/20220902194653.1739778-25-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
a3a9c39704
shmem: convert shmem_read_mapping_page_gfp() to use shmem_get_folio_gfp()
...
Saves a couple of calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-24-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:48 -07:00
68a541001a
shmem: convert shmem_fault() to use shmem_get_folio_gfp()
...
No particular advantage for this function, but necessary to remove
shmem_getpage_gfp().
[hughd@google.com: fix crash]
Link: https://lkml.kernel.org/r/7693a84-bdc2-27b5-2695-d0fe8566571f@google.com
Link: https://lkml.kernel.org/r/20220902194653.1739778-23-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:48 -07:00
fc26babbc7
shmem: convert shmem_getpage_gfp() to shmem_get_folio_gfp()
...
Add a shmem_getpage_gfp() wrapper for compatibility with current users.
Link: https://lkml.kernel.org/r/20220902194653.1739778-22-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:48 -07:00
5739a81cf8
shmem: eliminate struct page from shmem_swapin_folio()
...
Convert shmem_swapin() to return a folio and use swap_cache_get_folio(),
removing all uses of struct page in this function.
Link: https://lkml.kernel.org/r/20220902194653.1739778-21-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:48 -07:00
c9edc24281
swap: add swap_cache_get_folio()
...
Convert lookup_swap_cache() into swap_cache_get_folio() and add a
lookup_swap_cache() wrapper around it.
[akpm@linux-foundation.org: add CONFIG_SWAP=n stub for swap_cache_get_folio()]
Link: https://lkml.kernel.org/r/20220902194653.1739778-20-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:48 -07:00
0d698e2572
shmem: convert shmem_replace_page() to shmem_replace_folio()
...
The caller has a folio, so convert the calling convention and rename the
function.
Link: https://lkml.kernel.org/r/20220902194653.1739778-19-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:47 -07:00
7a7256d5f5
shmem: convert shmem_mfill_atomic_pte() to use a folio
...
Assert that this is a single-page folio as there are several assumptions
in here that it's exactly PAGE_SIZE bytes large. Saves several calls to
compound_head() and removes the last caller of shmem_alloc_page().
Link: https://lkml.kernel.org/r/20220902194653.1739778-18-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:47 -07:00
6599591816
memcg: convert mem_cgroup_swapin_charge_page() to mem_cgroup_swapin_charge_folio()
...
All callers now have a folio, so pass it in here and remove an unnecessary
call to page_folio().
Link: https://lkml.kernel.org/r/20220902194653.1739778-17-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:47 -07:00
d4f9565ae5
mm: convert do_swap_page()'s swapcache variable to a folio
...
The 'swapcache' variable is used to track whether the page is from the
swapcache or not. It can do this equally well by being the folio of the
page rather than the page itself, and this saves a number of calls to
compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-16-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:47 -07:00
63ad4add38
mm: convert do_swap_page() to use a folio
...
Removes quite a lot of calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-15-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:47 -07:00
4081f7446d
mm/swap: convert put_swap_page() to put_swap_folio()
...
With all callers now using a folio, we can convert this function.
Link: https://lkml.kernel.org/r/20220902194653.1739778-14-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:46 -07:00
a4c366f01f
mm/swap: convert add_to_swap_cache() to take a folio
...
With all callers using folios, we can convert add_to_swap_cache() to take
a folio and use it throughout.
Link: https://lkml.kernel.org/r/20220902194653.1739778-13-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:46 -07:00
a0d3374b07
mm/swap: convert __read_swap_cache_async() to use a folio
...
Remove a few hidden (and one visible) calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-12-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:46 -07:00
bdb0ed54a4
mm/swapfile: convert try_to_free_swap() to folio_free_swap()
...
Add kernel-doc for folio_free_swap() and make it return bool. Add a
try_to_free_swap() compatibility wrapper.
Link: https://lkml.kernel.org/r/20220902194653.1739778-11-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:46 -07:00