iommufd/iova_bitmap: Consider page offset for the pages to be pinned
[ Upstream commit 4bbcbc6ea2fa379632a24c14cfb47aa603816ac6 ]

For small bitmaps that aren't PAGE_SIZE aligned *and* that are less than
512 pages in bitmap length, use an extra page to be able to cover the
entire range, e.g. [1M..3G], which would be iterated more efficiently in
a single iteration, rather than two.

Fixes: b058ea3ab5af ("vfio/iova_bitmap: refactor iova_bitmap_set() to better handle page boundaries")
Link: https://lore.kernel.org/r/20240202133415.23819-10-joao.m.martins@oracle.com
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Tested-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent 929766dadb
commit c99e6b267d
@@ -175,18 +175,19 @@ static int iova_bitmap_get(struct iova_bitmap *bitmap)
 				   bitmap->mapped_base_index) *
 				  sizeof(*bitmap->bitmap), PAGE_SIZE);
 
-	/*
-	 * We always cap at max number of 'struct page' a base page can fit.
-	 * This is, for example, on x86 means 2M of bitmap data max.
-	 */
-	npages = min(npages, PAGE_SIZE / sizeof(struct page *));
-
 	/*
 	 * Bitmap address to be pinned is calculated via pointer arithmetic
 	 * with bitmap u64 word index.
 	 */
 	addr = bitmap->bitmap + bitmap->mapped_base_index;
 
+	/*
+	 * We always cap at max number of 'struct page' a base page can fit.
+	 * This is, for example, on x86 means 2M of bitmap data max.
+	 */
+	npages = min(npages + !!offset_in_page(addr),
+		     PAGE_SIZE / sizeof(struct page *));
+
 	ret = pin_user_pages_fast((unsigned long)addr, npages,
 				  FOLL_WRITE, mapped->pages);
 	if (ret <= 0)