# SPDX-License-Identifier: GPL-2.0
#
# PCI configuration
#
# select this to offer the PCI prompt
config HAVE_PCI
bool
# select this to unconditionally force on PCI support
config FORCE_PCI
bool
select HAVE_PCI
select PCI
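# A rough sketch of how an architecture Kconfig is expected to hook into the
# two symbols above (the ARCH_FOO/ARCH_BAR names are made up for
# illustration): a platform where PCI is optional selects HAVE_PCI so the
# prompt below appears, while a platform that cannot work without PCI
# selects FORCE_PCI instead:
#
#   config ARCH_FOO
#           bool "Foo platform"
#           select HAVE_PCI         # PCI available, user decides
#
#   config ARCH_BAR
#           bool "Bar platform"
#           select FORCE_PCI        # PCI is mandatory, forced on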
menuconfig PCI
bool "PCI support"
depends on HAVE_PCI
help
This option enables support for the PCI local bus, including
support for PCI-X and the foundations for PCI Express support.
Say 'Y' here unless you know what you are doing.
if PCI
config PCI_DOMAINS
bool
depends on PCI
config PCI_DOMAINS_GENERIC
bool
select PCI_DOMAINS
config PCI_SYSCALL
bool
source "drivers/pci/pcie/Kconfig"
config PCI_MSI
bool "Message Signaled Interrupts (MSI and MSI-X)"
select GENERIC_MSI_IRQ
help
This allows device drivers to enable MSI (Message Signaled
Interrupts). Message Signaled Interrupts enable a device to
generate an interrupt using an inbound Memory Write on its
PCI bus instead of asserting a device IRQ pin.
Use of PCI MSI interrupts can be disabled at kernel boot time
by using the 'pci=nomsi' option. This disables MSI for the
entire system.
If you don't know what to do here, say Y.
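# Illustrative only: a minimal .config fragment with MSI support enabled,
# together with the boot-time override mentioned above:
#
#   CONFIG_PCI=y
#   CONFIG_PCI_MSI=y
#
# Booting with "pci=nomsi" on the kernel command line then turns MSI back
# off for the whole system without rebuilding the kernel.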
config PCI_MSI_IRQ_DOMAIN
def_bool ARC || ARM || ARM64 || X86 || RISCV
depends on PCI_MSI
select GENERIC_MSI_IRQ_DOMAIN
config PCI_QUIRKS
default y
bool "Enable PCI quirk workarounds" if EXPERT
help
This enables workarounds for various PCI chipset bugs/quirks.
Disable this only if your target machine is unaffected by PCI
quirks.
config PCI_DEBUG
bool "PCI Debugging"
depends on DEBUG_KERNEL
help
Say Y here if you want the PCI core to produce a bunch of debug
messages to the system log. Select this if you are having a
problem with PCI support and want to see more of what is going on.
When in doubt, say N.
config PCI_REALLOC_ENABLE_AUTO
bool "Enable PCI resource re-allocation detection"
depends on PCI_IOV
help
Say Y here if you want the PCI core to detect if PCI resource
re-allocation needs to be enabled. You can always use pci=realloc=on
or pci=realloc=off to override it. It will automatically
re-allocate PCI resources if SR-IOV BARs have not been allocated by
the BIOS.
When in doubt, say N.
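# Illustrative kernel command-line overrides for the automatic detection
# described above:
#
#   pci=realloc=on     always re-allocate PCI resources
#   pci=realloc=off    never re-allocate, keep the firmware assignment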
config PCI_STUB
tristate "PCI Stub driver"
help
Say Y or M here if you want to be able to reserve a PCI device
when it is going to be assigned to a guest operating system.
When in doubt, say N.
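# A hedged usage sketch (the 8086:10fb ID below is a made-up example):
# pci-stub is typically told which devices to claim either on the kernel
# command line,
#
#   pci-stub.ids=8086:10fb
#
# or at run time through sysfs before handing the device to a guest:
#
#   echo "8086 10fb" > /sys/bus/pci/drivers/pci-stub/new_id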
config PCI_PF_STUB
tristate "PCI PF Stub driver"
depends on PCI_IOV
help
Say Y or M here if you want to enable support for devices that
require SR-IOV support, while at the same time the PF (Physical
Function) itself does not provide any actual services on the
host, such as storage or networking.
When in doubt, say N.
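# A rough sketch (the device address is hypothetical): as with pci-stub, a
# PF can be pointed at this driver via the generic driver_override sysfs
# attribute and then re-bound, e.g.:
#
#   echo pci-pf-stub > /sys/bus/pci/devices/0000:03:00.0/driver_override
#   echo 0000:03:00.0 > /sys/bus/pci/drivers/pci-pf-stub/bind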
config XEN_PCIDEV_FRONTEND
tristate "Xen PCI Frontend"
depends on X86 && XEN
select PCI_XEN
select XEN_XENBUS_FRONTEND
default y
help
The PCI device frontend driver allows the kernel to import arbitrary
PCI devices from a PCI backend to support PCI driver domains.
config PCI_ATS
bool
config PCI_ECAM
bool
config PCI_LOCKLESS_CONFIG
bool
config PCI_BRIDGE_EMUL
bool
config PCI_IOV
bool "PCI IOV support"
select PCI_ATS
help
I/O Virtualization is a PCI feature supported by some devices
which allows them to create virtual devices which share their
physical resources.
If unsure, say N.
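# Illustrative only (the device address is hypothetical): once a driver
# with SR-IOV support has bound to the PF, virtual functions are typically
# created through sysfs, e.g.:
#
#   echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs
#
# and removed again by writing 0 to the same file.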
config PCI_PRI
bool "PCI PRI support"
select PCI_ATS
help
PRI is the PCI Page Request Interface. It allows PCI devices that are
behind an IOMMU to recover from page faults.
If unsure, say N.
config PCI_PASID
bool "PCI PASID support"
select PCI_ATS
help
Process Address Space Identifiers (PASIDs) can be used by PCI devices
to access more than one IO address space at the same time. To make
use of this feature an IOMMU is required which also supports PASIDs.
Select this option if you have such an IOMMU and want to compile the
driver for it into your kernel.
If unsure, say N.
config PCI_P2PDMA
bool "PCI peer-to-peer transfer support"
depends on ZONE_DEVICE
select GENERIC_ALLOCATOR
help
Enables drivers to do PCI peer-to-peer transactions to and from
BARs that are exposed in other devices that are part of
the hierarchy where peer-to-peer DMA is guaranteed by the PCI
specification to work (i.e. anything below a single PCI bridge).

Many PCIe root complexes do not support P2P transactions and
it's hard to tell which ones support it at all, so at this time,
P2P DMA transactions must be between devices behind the same root
port.
If unsure, say N.
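# Note: this option only provides the infrastructure; in-kernel users
# (for example the NVMe target code) obtain peer memory through helpers
# such as pci_p2pmem_find(), pci_p2pdma_distance() and pci_alloc_p2pmem().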
config PCI_LABEL
def_bool y if (DMI || ACPI)
select NLS
config PCI_HYPERV
tristate "Hyper-V PCI Frontend"
depends on X86 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && X86_64
help
The PCI device frontend driver allows the kernel to import arbitrary
PCI devices from a PCI backend to support PCI driver domains.
source "drivers/pci/hotplug/Kconfig"
source "drivers/pci/controller/Kconfig"
source "drivers/pci/endpoint/Kconfig"
source "drivers/pci/switch/Kconfig"
endif