arm64: tlbflush: add some comments for TLB batched flushing
Add comments for arch_flush_tlb_batched_pending() and
arch_tlbbatch_flush() to illustrate why only a DSB is needed.

Link: https://lkml.kernel.org/r/20230801124203.62164-1-yangyicong@huawei.com
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Barry Song <21cnbao@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent ebddd111fc
commit 6a718bd2ed
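To show how the two hooks below fit together, here is a minimal caller-side sketch (not part of the patch): each page gets its TLBI queued via arch_tlbbatch_add_pending(), and a single arch_tlbbatch_flush() waits for all of them at once. The caller unmap_pages_batched() and the PTE-clearing step are illustrative stand-ins for the real rmap reclaim path, not actual kernel functions.

/* Illustrative sketch only -- unmap_pages_batched() is a hypothetical
 * caller, not the real reclaim path in mm/rmap.c.
 */
#include <linux/mm_types.h>
#include <asm/tlbflush.h>

static void unmap_pages_batched(struct mm_struct *mm,
				unsigned long *uaddrs, int nr)
{
	struct arch_tlbflush_unmap_batch batch = {};
	int i;

	for (i = 0; i < nr; i++) {
		/* ... clear the PTE for uaddrs[i] under the PTL ... */

		/* Issues a per-page TLBI but does not wait (no DSB yet). */
		arch_tlbbatch_add_pending(&batch, mm, uaddrs[i]);
	}

	/*
	 * All the TLBIs above are already in flight, so completing the
	 * batch only needs the dsb(ish) in arch_tlbbatch_flush(), rather
	 * than a TLBI;DSB pair per page.
	 */
	arch_tlbbatch_flush(&batch);
}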
@@ -304,11 +304,26 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 	__flush_tlb_page_nosync(mm, uaddr);
 }
 
+/*
+ * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
+ * synchronise all the TLBIs issued with a DSB to avoid the race mentioned in
+ * flush_tlb_batched_pending().
+ */
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	dsb(ish);
 }
 
+/*
+ * To support TLB batched flush for multiple pages unmapping, we only send
+ * the TLBI for each page in arch_tlbbatch_add_pending() and wait for the
+ * completion at the end in arch_tlbbatch_flush(). Since we've already issued
+ * a TLBI for each page, only a DSB is needed to synchronise its effect on
+ * the other CPUs.
+ *
+ * This saves the time spent waiting on the DSB compared to issuing a
+ * TLBI;DSB sequence for each page.
+ */
 static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	dsb(ish);
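As a hedged illustration of the race the first comment refers to: a path that is about to change the page tables (mprotect/munmap/etc) must first make sure any TLBIs queued by a concurrent reclaim batch have completed. The function below is a hypothetical sketch of such a caller; the surrounding locking and the real call chain through flush_tlb_batched_pending() in mm/ are omitted.

/* Illustrative sketch only -- change_prot_sketch() is hypothetical and
 * stands in for the generic flush_tlb_batched_pending() call site.
 */
#include <linux/mm_types.h>
#include <asm/tlbflush.h>

static void change_prot_sketch(struct mm_struct *mm)
{
	/*
	 * A reclaim batch may have per-page TLBIs in flight for this mm.
	 * They must complete before the page tables are modified again,
	 * or stale translations could be used across the change. Since
	 * the TLBIs were already issued by arch_tlbbatch_add_pending(),
	 * waiting for them is just the dsb(ish) below.
	 */
	arch_flush_tlb_batched_pending(mm);

	/* ... now safe to modify the PTEs for mprotect/munmap ... */
}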