intel-iommu: Don't use identity mapping for PCI devices behind bridges
Our current strategy for pass-through mode is to put all devices into the 1:1 domain at startup (which is before we know what their dma_mask will be), and only _later_ take them out of that domain, if it turns out that they really can't address all of memory.

However, when there are a bunch of PCI devices behind a bridge, they all end up with the same source-id on their DMA transactions, and hence in the same IOMMU domain. This means that we _can't_ easily move them from the 1:1 domain into their own domain at runtime, because there might be DMA in-flight from their siblings.

So we have to adjust our pass-through strategy: For PCI devices not on the root bus, and for the bridges which will take responsibility for their transactions, we have to start up _out_ of the 1:1 domain, just in case.

This fixes the BUG() we see when we have 32-bit-capable devices behind a PCI-PCI bridge, and use the software identity mapping.

It does mean that we might end up using 'normal' mapping mode for some devices which could actually live with the faster 1:1 mapping -- but this is only for PCI devices behind bridges, which presumably aren't the devices for which people are most concerned about performance.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
parent 6941af2810
commit 3dfc813d94
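As background for the hunk below: the "take them out of the 1:1 domain later" decision keys off the device's DMA mask, which a driver normally sets during probe. A minimal sketch of that driver-side step, for illustration only (the example_probe() name is hypothetical and not part of this patch):

#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>

/*
 * Illustrative only: a device that can address just 32 bits of memory
 * sets a 32-bit DMA mask at probe time. For such a device,
 * pdev->dma_mask > DMA_BIT_MASK(32) is false, so it cannot stay in the
 * 1:1 identity domain.
 */
static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))
		return -EIO;	/* no usable DMA addressing at all */

	/* ... remainder of normal driver setup ... */
	return 0;
}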
@@ -2122,6 +2122,36 @@ static int iommu_should_identity_map(struct pci_dev *pdev, int startup)
	if (iommu_identity_mapping == 2)
		return IS_GFX_DEVICE(pdev);

	/*
	 * We want to start off with all devices in the 1:1 domain, and
	 * take them out later if we find they can't access all of memory.
	 *
	 * However, we can't do this for PCI devices behind bridges,
	 * because all PCI devices behind the same bridge will end up
	 * with the same source-id on their transactions.
	 *
	 * Practically speaking, we can't change things around for these
	 * devices at run-time, because we can't be sure there'll be no
	 * DMA transactions in flight for any of their siblings.
	 *
	 * So PCI devices (unless they're on the root bus) as well as
	 * their parent PCI-PCI or PCIe-PCI bridges must be left _out_ of
	 * the 1:1 domain, just in _case_ one of their siblings turns out
	 * not to be able to map all of memory.
	 */
	if (!pdev->is_pcie) {
		if (!pci_is_root_bus(pdev->bus))
			return 0;
		if (pdev->class >> 8 == PCI_CLASS_BRIDGE_PCI)
			return 0;
	} else if (pdev->pcie_type == PCI_EXP_TYPE_PCI_BRIDGE)
		return 0;

	/*
	 * At boot time, we don't yet know if devices will be 64-bit capable.
	 * Assume that they will -- if they turn out not to be, then we can
	 * take them out of the 1:1 domain later.
	 */
	if (!startup)
		return pdev->dma_mask > DMA_BIT_MASK(32);
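For orientation, a hedged sketch of how a startup pass might consult iommu_should_identity_map(); the loop and the example_attach_identity() helper are illustrative stand-ins, not the actual call sites touched by this patch:

/*
 * Illustrative sketch only. At startup (startup == 1) the dma_mask test
 * above is skipped -- drivers haven't set their masks yet -- so only the
 * bus-topology checks decide whether a device goes into the 1:1 domain.
 * At run time (startup == 0) the 64-bit-capability test applies as well.
 */
static void example_static_identity_pass(void)
{
	struct pci_dev *pdev = NULL;

	while ((pdev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, pdev)) != NULL) {
		if (iommu_should_identity_map(pdev, 1))
			example_attach_identity(pdev);	/* hypothetical helper */
	}
}

Gating the dma_mask test on !startup is what lets the boot-time pass assume 64-bit capability, while the run-time path can still refuse identity mapping for devices that turn out to be 32-bit only.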