Intel-IOMMU Alignment Issue in dma_pte_clear_range()
This issue was pointed out by Linus. In dma_pte_clear_range() in intel-iommu.c:

	start = PAGE_ALIGN(start);
	end &= PAGE_MASK;
	npages = (end - start) / VTD_PAGE_SIZE;

In the partial-page case, start can end up bigger than end, and npages will be negative. The issue does not currently show up as a real bug because all callers already align start and end to page boundaries, so it has been hidden. But it is dangerous programming practice.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
parent ffa009c366
commit 31d3568dfe
@@ -733,8 +733,8 @@ static void dma_pte_clear_range(struct dmar_domain *domain, u64 start, u64 end)
 	start &= (((u64)1) << addr_width) - 1;
 	end &= (((u64)1) << addr_width) - 1;
 	/* in case it's partial page */
-	start = PAGE_ALIGN(start);
-	end &= PAGE_MASK;
+	start &= PAGE_MASK;
+	end = PAGE_ALIGN(end);
 	npages = (end - start) / VTD_PAGE_SIZE;
 
 	/* we don't need lock here, nobody else touches the iova range */