pci-v6.6-changes
Merge tag 'pci-v6.6-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:

 "Enumeration:
   - Add locking to read/modify/write PCIe Capability Register accessors
     for Link Control and Root Control
   - Use pci_dev_id() when possible instead of manually composing ID from
     dev->bus->number and dev->devfn

  Resource management:
   - Move prototypes for __weak sysfs resource files to linux/pci.h to fix
     'no previous prototype' warnings
   - Make more I/O port accesses depend on HAS_IOPORT
   - Use devm_platform_get_and_ioremap_resource() instead of open-coding
     platform_get_resource() followed by devm_ioremap_resource()

  Power management:
   - Ensure devices are powered up while accessing VPD
   - If device is powered-up, keep it that way while polling for PME
   - Only read PCI_PM_CTRL register when available, to avoid reading the
     wrong register and corrupting dev->current_state

  Virtualization:
   - Avoid Secondary Bus Reset on NVIDIA T4 GPUs

  Error handling:
   - Remove unused pci_disable_pcie_error_reporting()
   - Unexport pci_enable_pcie_error_reporting(), used only by aer.c
   - Unexport pcie_port_bus_type, used only by PCI core

  VGA:
   - Simplify and clean up typos in VGA arbiter

  Apple PCIe controller driver:
   - Initialize pcie->nvecs (number of available MSIs) before use

  Broadcom iProc PCIe controller driver:
   - Use of_property_read_bool() instead of low-level accessors for
     boolean properties

  Broadcom STB PCIe controller driver:
   - Assert PERST# when probing BCM2711 because some bootloaders don't do it

  Freescale i.MX6 PCIe controller driver:
   - Add .host_deinit() callback so we can clean up things like regulators
     on probe failure or driver unload

  Freescale Layerscape PCIe controller driver:
   - Add support for link-down notification so the endpoint driver can
     process LINK_DOWN events
   - Add suspend/resume support, including manual PME_Turn_off/PME_TO_Ack
     handshake
   - Save Link Capabilities during probe so they can be restored when
     handling a link-up event, since the controller loses the Link Width
     and Link Speed values during reset

  Intel VMD host bridge driver:
   - Fix disable of bridge windows during domain reset; previously we
     cleared the base/limit registers, which actually left the windows
     enabled

  Marvell MVEBU PCIe controller driver:
   - Remove unused busn member

  Microchip PolarFlare PCIe controller driver:
   - Fix interrupt bit definitions so the SEC and DED interrupt handlers
     work correctly
   - Make driver buildable as a module
   - Read FPGA MSI configuration parameters from hardware instead of
     hard-coding them

  Microsoft Hyper-V host bridge driver:
   - To avoid a NULL pointer dereference, skip MSI restore after hibernate
     if MSI/MSI-X hasn't been enabled

  NVIDIA Tegra194 PCIe controller driver:
   - Revert 'PCI: tegra194: Enable support for 256 Byte payload' because
     Linux doesn't know how to reduce MPS from 256 to 128 bytes for
     endpoints below a switch (because other devices below the switch
     might already be operating), which leads to 'Malformed TLP' errors

  Qualcomm PCIe controller driver:
   - Add DT and driver support for interconnect bandwidth voting for
     'pcie-mem' and 'cpu-pcie' interconnects
   - Fix broken SDX65 'compatible' DT property
   - Configure controller so MHI bus master clock will be switched off
     while in ASPM L1.x states
   - Use alignment restriction from EPF core in EPF MHI driver
   - Add Endpoint eDMA support
   - Add MHI eDMA support
   - Add Snapdragon SM8450 support to the EPF MHI driver
   - Use iATU for EPF MHI transfers smaller than 4K to avoid eDMA setup
     latency
   - Add sa8775p DT binding and driver support

  Rockchip PCIe controller driver:
   - Use 64-bit mask on MSI 64-bit PCI address to avoid zeroing out the
     upper 32 bits

  SiFive FU740 PCIe controller driver:
   - Set the supported number of MSI vectors so we can use all available
     MSI interrupts

  Synopsys DesignWare PCIe controller driver:
   - Add generic dwc suspend/resume APIs (dw_pcie_suspend_noirq() and
     dw_pcie_resume_noirq()) to be called by controller driver
     suspend/resume ops, and a controller callback to send PME_Turn_Off

  MicroSemi Switchtec management driver:
   - Add support for PCIe Gen5 devices

  Miscellaneous:
   - Reorder and compress to reduce size of struct pci_dev
   - Fix race in DOE destroy_work_on_stack()
   - Add stubs to avoid casts between incompatible function types
   - Explicitly include correct DT includes to untangle headers"

* tag 'pci-v6.6-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (96 commits)
  PCI: qcom-ep: Add ICC bandwidth voting support
  dt-bindings: PCI: qcom: ep: Add interconnects path
  PCI: qcom-ep: Treat unknown IRQ events as an error
  dt-bindings: PCI: qcom: Fix SDX65 compatible
  PCI: endpoint: Add kernel-doc for pci_epc_mem_init() API
  PCI: epf-mhi: Use iATU for small transfers
  PCI: epf-mhi: Add support for SM8450
  PCI: epf-mhi: Add eDMA support
  PCI: qcom-ep: Add eDMA support
  PCI: epf-mhi: Make use of the alignment restriction from EPF core
  PCI/PM: Only read PCI_PM_CTRL register when available
  PCI: qcom: Add support for sa8775p SoC
  dt-bindings: PCI: qcom: Add sa8775p compatible
  PCI: qcom-ep: Pass alignment restriction to the EPF core
  PCI: Simplify pcie_capability_clear_and_set_word() control flow
  PCI: Tidy config space save/restore messages
  PCI: Fix code formatting inconsistencies
  PCI: Fix typos in docs and comments
  PCI: Fix pci_bus_resetable(), pci_slot_resetable() name typos
  PCI: Simplify pci_dev_driver()
  ...
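The "Enumeration" item about pci_dev_id() refers to a mechanical conversion in callers. A minimal sketch of what such a call site looks like; the helper name here is invented and stands in for the real callers:

    #include <linux/pci.h>

    /* Hypothetical call site; only the pci_dev_id() usage is the point. */
    static u16 example_requester_id(struct pci_dev *dev)
    {
            /*
             * Open-coded form being replaced:
             *   PCI_DEVID(dev->bus->number, dev->devfn)
             */
            return pci_dev_id(dev);
    }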
commit b6f6167ea8
@@ -17,7 +17,7 @@ chipsets are able to deal with these errors; these include PCI-E chipsets,
 and the PCI-host bridges found on IBM Power4, Power5 and Power6-based
 pSeries boxes. A typical action taken is to disconnect the affected device,
 halting all I/O to it. The goal of a disconnection is to avoid system
-corruption; for example, to halt system memory corruption due to DMA's
+corruption; for example, to halt system memory corruption due to DMAs
 to "wild" addresses. Typically, a reconnection mechanism is also
 offered, so that the affected PCI device(s) are reset and put back
 into working condition. The reset phase requires coordination

@@ -178,9 +178,9 @@ is STEP 6 (Permanent Failure).
 complex and not worth implementing.
 
 The current powerpc implementation doesn't much care if the device
-attempts I/O at this point, or not. I/O's will fail, returning
+attempts I/O at this point, or not. I/Os will fail, returning
 a value of 0xff on read, and writes will be dropped. If more than
-EEH_MAX_FAILS I/O's are attempted to a frozen adapter, EEH
+EEH_MAX_FAILS I/Os are attempted to a frozen adapter, EEH
 assumes that the device driver has gone into an infinite loop
 and prints an error to syslog. A reboot is then required to
 get the device working again.

@@ -204,7 +204,7 @@ instead will have gone directly to STEP 3 (Link Reset) or STEP 4 (Slot Reset)
 .. note::
 
    The following is proposed; no platform implements this yet:
-   Proposal: All I/O's should be done _synchronously_ from within
+   Proposal: All I/Os should be done _synchronously_ from within
    this callback, errors triggered by them will be returned via
    the normal pci_check_whatever() API, no new error_detected()
    callback will be issued due to an error happening here. However,

@@ -258,7 +258,7 @@ Powerpc platforms implement two levels of slot reset:
 soft reset(default) and fundamental(optional) reset.
 
 Powerpc soft reset consists of asserting the adapter #RST line and then
-restoring the PCI BAR's and PCI configuration header to a state
+restoring the PCI BARs and PCI configuration header to a state
 that is equivalent to what it would be after a fresh system
 power-on followed by power-on BIOS/system firmware initialization.
 Soft reset is also known as hot-reset.

@@ -362,7 +362,7 @@ permanent failure in some way. If the device is hotplug-capable,
 the operator will probably want to remove and replace the device.
 Note, however, not all failures are truly "permanent". Some are
 caused by over-heating, some by a poorly seated card. Many
-PCI error events are caused by software bugs, e.g. DMA's to
+PCI error events are caused by software bugs, e.g. DMAs to
 wild addresses or bogus split transactions due to programming
 errors. See the discussion in Documentation/powerpc/eeh-pci-error-recovery.rst
 for additional detail on real-life experience of the causes of

@@ -213,8 +213,12 @@ PCI Config Registers
 --------------------
 
 Each service driver runs its PCI config operations on its own
-capability structure except the PCI Express capability structure, in
-which Root Control register and Device Control register are shared
-between PME and AER. This patch assumes that all service drivers
-will be well behaved and not overwrite other service driver's
-configuration settings.
+capability structure except the PCI Express capability structure,
+that is shared between many drivers including the service drivers.
+RMW Capability accessors (pcie_capability_clear_and_set_word(),
+pcie_capability_set_word(), and pcie_capability_clear_word()) protect
+a selected set of PCI Express Capability Registers (Link Control
+Register and Root Control Register). Any change to those registers
+should be performed using RMW accessors to avoid problems due to
+concurrent updates. For the up-to-date list of protected registers,
+see pcie_capability_clear_and_set_word().

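The paragraph added above is the documentation side of the new RMW protection. A minimal usage sketch, assuming a hypothetical port-service helper (the register and bit are real; the function around them is invented):

    #include <linux/pci.h>

    /* Hypothetical helper: enable PME interrupt generation on a Root Port. */
    static int example_enable_pme_irq(struct pci_dev *port)
    {
            /*
             * Root Control is shared between port services (PME, AER, ...),
             * so it must be changed through the RMW accessor rather than an
             * open-coded read/modify/write of the register.
             */
            return pcie_capability_set_word(port, PCI_EXP_RTCTL,
                                            PCI_EXP_RTCTL_PMEIE);
    }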
@ -11,10 +11,13 @@ maintainers:
|
||||
|
||||
properties:
|
||||
compatible:
|
||||
enum:
|
||||
- qcom,sdx55-pcie-ep
|
||||
- qcom,sdx65-pcie-ep
|
||||
- qcom,sm8450-pcie-ep
|
||||
oneOf:
|
||||
- enum:
|
||||
- qcom,sdx55-pcie-ep
|
||||
- qcom,sm8450-pcie-ep
|
||||
- items:
|
||||
- const: qcom,sdx65-pcie-ep
|
||||
- const: qcom,sdx55-pcie-ep
|
||||
|
||||
reg:
|
||||
items:
|
||||
@ -71,6 +74,14 @@ properties:
|
||||
description: GPIO used as WAKE# output signal
|
||||
maxItems: 1
|
||||
|
||||
interconnects:
|
||||
maxItems: 2
|
||||
|
||||
interconnect-names:
|
||||
items:
|
||||
- const: pcie-mem
|
||||
- const: cpu-pcie
|
||||
|
||||
resets:
|
||||
maxItems: 1
|
||||
|
||||
@ -98,6 +109,8 @@ required:
|
||||
- interrupts
|
||||
- interrupt-names
|
||||
- reset-gpios
|
||||
- interconnects
|
||||
- interconnect-names
|
||||
- resets
|
||||
- reset-names
|
||||
- power-domains
|
||||
@ -110,7 +123,6 @@ allOf:
|
||||
contains:
|
||||
enum:
|
||||
- qcom,sdx55-pcie-ep
|
||||
- qcom,sdx65-pcie-ep
|
||||
then:
|
||||
properties:
|
||||
clocks:
|
||||
@ -167,7 +179,9 @@ examples:
|
||||
- |
|
||||
#include <dt-bindings/clock/qcom,gcc-sdx55.h>
|
||||
#include <dt-bindings/gpio/gpio.h>
|
||||
#include <dt-bindings/interconnect/qcom,sdx55.h>
|
||||
#include <dt-bindings/interrupt-controller/arm-gic.h>
|
||||
|
||||
pcie_ep: pcie-ep@1c00000 {
|
||||
compatible = "qcom,sdx55-pcie-ep";
|
||||
reg = <0x01c00000 0x3000>,
|
||||
@ -194,6 +208,9 @@ examples:
|
||||
interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>,
|
||||
<GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>;
|
||||
interrupt-names = "global", "doorbell";
|
||||
interconnects = <&system_noc MASTER_PCIE &mc_virt SLAVE_EBI_CH0>,
|
||||
<&mem_noc MASTER_AMPSS_M0 &system_noc SLAVE_PCIE_0>;
|
||||
interconnect-names = "pcie-mem", "cpu-pcie";
|
||||
reset-gpios = <&tlmm 57 GPIO_ACTIVE_LOW>;
|
||||
wake-gpios = <&tlmm 53 GPIO_ACTIVE_LOW>;
|
||||
resets = <&gcc GCC_PCIE_BCR>;
|
||||
|
@ -29,6 +29,7 @@ properties:
|
||||
- qcom,pcie-msm8996
|
||||
- qcom,pcie-qcs404
|
||||
- qcom,pcie-sa8540p
|
||||
- qcom,pcie-sa8775p
|
||||
- qcom,pcie-sc7280
|
||||
- qcom,pcie-sc8180x
|
||||
- qcom,pcie-sc8280xp
|
||||
@ -211,6 +212,7 @@ allOf:
|
||||
compatible:
|
||||
contains:
|
||||
enum:
|
||||
- qcom,pcie-sa8775p
|
||||
- qcom,pcie-sc7280
|
||||
- qcom,pcie-sc8180x
|
||||
- qcom,pcie-sc8280xp
|
||||
@ -743,12 +745,37 @@ allOf:
|
||||
items:
|
||||
- const: pci # PCIe core reset
|
||||
|
||||
- if:
|
||||
properties:
|
||||
compatible:
|
||||
contains:
|
||||
enum:
|
||||
- qcom,pcie-sa8775p
|
||||
then:
|
||||
properties:
|
||||
clocks:
|
||||
minItems: 5
|
||||
maxItems: 5
|
||||
clock-names:
|
||||
items:
|
||||
- const: aux # Auxiliary clock
|
||||
- const: cfg # Configuration clock
|
||||
- const: bus_master # Master AXI clock
|
||||
- const: bus_slave # Slave AXI clock
|
||||
- const: slave_q2a # Slave Q2A clock
|
||||
resets:
|
||||
maxItems: 1
|
||||
reset-names:
|
||||
items:
|
||||
- const: pci # PCIe core reset
|
||||
|
||||
- if:
|
||||
properties:
|
||||
compatible:
|
||||
contains:
|
||||
enum:
|
||||
- qcom,pcie-sa8540p
|
||||
- qcom,pcie-sa8775p
|
||||
- qcom,pcie-sc8280xp
|
||||
then:
|
||||
required:
|
||||
@ -790,6 +817,7 @@ allOf:
|
||||
contains:
|
||||
enum:
|
||||
- qcom,pcie-msm8996
|
||||
- qcom,pcie-sa8775p
|
||||
- qcom,pcie-sc7280
|
||||
- qcom,pcie-sc8180x
|
||||
- qcom,pcie-sdm845
|
||||
|
@@ -88,7 +88,4 @@ extern void pci_adjust_legacy_attr(struct pci_bus *bus,
 				   enum pci_mmap_state mmap_type);
 #define HAVE_PCI_LEGACY	1
 
-extern int pci_create_resource_files(struct pci_dev *dev);
-extern void pci_remove_resource_files(struct pci_dev *dev);
-
 #endif /* __ALPHA_PCI_H */

@@ -136,14 +136,14 @@ static inline struct irq_routing_table *pirq_convert_irt_table(u8 *addr,
 	if (ir->signature != IRT_SIGNATURE || !ir->used || ir->size < ir->used)
 		return NULL;
 
-	size = sizeof(*ir) + ir->used * sizeof(ir->slots[0]);
+	size = struct_size(ir, slots, ir->used);
 	if (size > limit - addr)
 		return NULL;
 
 	DBG(KERN_DEBUG "PCI: $IRT Interrupt Routing Table found at 0x%lx\n",
 	    __pa(ir));
 
-	size = sizeof(*rt) + ir->used * sizeof(rt->slots[0]);
+	size = struct_size(rt, slots, ir->used);
 	rt = kzalloc(size, GFP_KERNEL);
 	if (!rt)
 		return NULL;

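The $IRT change above swaps open-coded trailing-array arithmetic for struct_size(), which saturates instead of wrapping on overflow. A self-contained sketch of the same idiom; the structure and helper are invented for illustration:

    #include <linux/overflow.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    struct example_table {
            unsigned int used;
            struct { u8 bus; u8 devfn; } slots[];   /* flexible array member */
    };

    static struct example_table *example_table_alloc(unsigned int used)
    {
            struct example_table *t;

            /*
             * Equivalent to sizeof(*t) + used * sizeof(t->slots[0]), except
             * the math saturates to SIZE_MAX on overflow, so a huge 'used'
             * cannot produce an undersized allocation.
             */
            t = kzalloc(struct_size(t, slots, used), GFP_KERNEL);
            if (t)
                    t->used = used;
            return t;
    }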
@ -1574,17 +1574,8 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
|
||||
u16 bridge_cfg2, gpu_cfg2;
|
||||
u32 max_lw, current_lw, tmp;
|
||||
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL,
|
||||
&bridge_cfg);
|
||||
pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL,
|
||||
&gpu_cfg);
|
||||
|
||||
tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
|
||||
pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
|
||||
|
||||
tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
|
||||
pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
|
||||
|
||||
tmp = RREG32_PCIE(ixPCIE_LC_STATUS1);
|
||||
max_lw = (tmp & PCIE_LC_STATUS1__LC_DETECTED_LINK_WIDTH_MASK) >>
|
||||
@ -1637,21 +1628,14 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
|
||||
msleep(100);
|
||||
|
||||
/* linkctl */
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL,
|
||||
&tmp16);
|
||||
tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
|
||||
tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_write_word(root, PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
|
||||
pcie_capability_read_word(adev->pdev,
|
||||
PCI_EXP_LNKCTL,
|
||||
&tmp16);
|
||||
tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
|
||||
tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_write_word(adev->pdev,
|
||||
PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_HAWD,
|
||||
bridge_cfg &
|
||||
PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_HAWD,
|
||||
gpu_cfg &
|
||||
PCI_EXP_LNKCTL_HAWD);
|
||||
|
||||
/* linkctl2 */
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
|
||||
|
@ -2276,17 +2276,8 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev)
|
||||
u16 bridge_cfg2, gpu_cfg2;
|
||||
u32 max_lw, current_lw, tmp;
|
||||
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL,
|
||||
&bridge_cfg);
|
||||
pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL,
|
||||
&gpu_cfg);
|
||||
|
||||
tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
|
||||
pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
|
||||
|
||||
tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
|
||||
pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
|
||||
|
||||
tmp = RREG32_PCIE(PCIE_LC_STATUS1);
|
||||
max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT;
|
||||
@ -2331,21 +2322,14 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev)
|
||||
|
||||
mdelay(100);
|
||||
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL,
|
||||
&tmp16);
|
||||
tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
|
||||
tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_write_word(root, PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
|
||||
pcie_capability_read_word(adev->pdev,
|
||||
PCI_EXP_LNKCTL,
|
||||
&tmp16);
|
||||
tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
|
||||
tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_write_word(adev->pdev,
|
||||
PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_HAWD,
|
||||
bridge_cfg &
|
||||
PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_HAWD,
|
||||
gpu_cfg &
|
||||
PCI_EXP_LNKCTL_HAWD);
|
||||
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
|
||||
&tmp16);
|
||||
|
@ -9534,17 +9534,8 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev)
|
||||
u16 bridge_cfg2, gpu_cfg2;
|
||||
u32 max_lw, current_lw, tmp;
|
||||
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL,
|
||||
&bridge_cfg);
|
||||
pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL,
|
||||
&gpu_cfg);
|
||||
|
||||
tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
|
||||
pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
|
||||
|
||||
tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
|
||||
pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_set_word(rdev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
|
||||
|
||||
tmp = RREG32_PCIE_PORT(PCIE_LC_STATUS1);
|
||||
max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT;
|
||||
@ -9591,21 +9582,14 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev)
|
||||
msleep(100);
|
||||
|
||||
/* linkctl */
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL,
|
||||
&tmp16);
|
||||
tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
|
||||
tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_write_word(root, PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
|
||||
pcie_capability_read_word(rdev->pdev,
|
||||
PCI_EXP_LNKCTL,
|
||||
&tmp16);
|
||||
tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
|
||||
tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_write_word(rdev->pdev,
|
||||
PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_HAWD,
|
||||
bridge_cfg &
|
||||
PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_clear_and_set_word(rdev->pdev, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_HAWD,
|
||||
gpu_cfg &
|
||||
PCI_EXP_LNKCTL_HAWD);
|
||||
|
||||
/* linkctl2 */
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
|
||||
|
@ -7131,17 +7131,8 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev)
|
||||
u16 bridge_cfg2, gpu_cfg2;
|
||||
u32 max_lw, current_lw, tmp;
|
||||
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL,
|
||||
&bridge_cfg);
|
||||
pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL,
|
||||
&gpu_cfg);
|
||||
|
||||
tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
|
||||
pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
|
||||
|
||||
tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
|
||||
pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_set_word(rdev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
|
||||
|
||||
tmp = RREG32_PCIE(PCIE_LC_STATUS1);
|
||||
max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT;
|
||||
@ -7188,22 +7179,14 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev)
|
||||
msleep(100);
|
||||
|
||||
/* linkctl */
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL,
|
||||
&tmp16);
|
||||
tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
|
||||
tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_write_word(root,
|
||||
PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
|
||||
pcie_capability_read_word(rdev->pdev,
|
||||
PCI_EXP_LNKCTL,
|
||||
&tmp16);
|
||||
tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
|
||||
tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_write_word(rdev->pdev,
|
||||
PCI_EXP_LNKCTL,
|
||||
tmp16);
|
||||
pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_HAWD,
|
||||
bridge_cfg &
|
||||
PCI_EXP_LNKCTL_HAWD);
|
||||
pcie_capability_clear_and_set_word(rdev->pdev, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_HAWD,
|
||||
gpu_cfg &
|
||||
PCI_EXP_LNKCTL_HAWD);
|
||||
|
||||
/* linkctl2 */
|
||||
pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
|
||||
|
@ -338,7 +338,7 @@ static int mlx5_check_dev_ids(struct mlx5_core_dev *dev, u16 dev_id)
|
||||
list_for_each_entry(sdev, &bridge_bus->devices, bus_list) {
|
||||
err = pci_read_config_word(sdev, PCI_DEVICE_ID, &sdev_id);
|
||||
if (err)
|
||||
return err;
|
||||
return pcibios_err_to_errno(err);
|
||||
if (sdev_id != dev_id) {
|
||||
mlx5_core_warn(dev, "unrecognized dev_id (0x%x)\n", sdev_id);
|
||||
return -EPERM;
|
||||
@ -398,7 +398,7 @@ static int mlx5_pci_link_toggle(struct mlx5_core_dev *dev)
|
||||
|
||||
err = pci_read_config_word(dev->pdev, PCI_DEVICE_ID, &dev_id);
|
||||
if (err)
|
||||
return err;
|
||||
return pcibios_err_to_errno(err);
|
||||
err = mlx5_check_dev_ids(dev, dev_id);
|
||||
if (err)
|
||||
return err;
|
||||
@ -411,18 +411,13 @@ static int mlx5_pci_link_toggle(struct mlx5_core_dev *dev)
|
||||
pci_cfg_access_lock(sdev);
|
||||
}
|
||||
/* PCI link toggle */
|
||||
err = pci_read_config_word(bridge, cap + PCI_EXP_LNKCTL, ®16);
|
||||
err = pcie_capability_set_word(bridge, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_LD);
|
||||
if (err)
|
||||
return err;
|
||||
reg16 |= PCI_EXP_LNKCTL_LD;
|
||||
err = pci_write_config_word(bridge, cap + PCI_EXP_LNKCTL, reg16);
|
||||
if (err)
|
||||
return err;
|
||||
return pcibios_err_to_errno(err);
|
||||
msleep(500);
|
||||
reg16 &= ~PCI_EXP_LNKCTL_LD;
|
||||
err = pci_write_config_word(bridge, cap + PCI_EXP_LNKCTL, reg16);
|
||||
err = pcie_capability_clear_word(bridge, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_LD);
|
||||
if (err)
|
||||
return err;
|
||||
return pcibios_err_to_errno(err);
|
||||
|
||||
/* Check link */
|
||||
if (!bridge->link_active_reporting) {
|
||||
@ -435,7 +430,7 @@ static int mlx5_pci_link_toggle(struct mlx5_core_dev *dev)
|
||||
do {
|
||||
err = pci_read_config_word(bridge, cap + PCI_EXP_LNKSTA, ®16);
|
||||
if (err)
|
||||
return err;
|
||||
return pcibios_err_to_errno(err);
|
||||
if (reg16 & PCI_EXP_LNKSTA_DLLLA)
|
||||
break;
|
||||
msleep(20);
|
||||
@ -453,7 +448,7 @@ static int mlx5_pci_link_toggle(struct mlx5_core_dev *dev)
|
||||
do {
|
||||
err = pci_read_config_word(dev->pdev, PCI_DEVICE_ID, ®16);
|
||||
if (err)
|
||||
return err;
|
||||
return pcibios_err_to_errno(err);
|
||||
if (reg16 == dev_id)
|
||||
break;
|
||||
msleep(20);
|
||||
|
@ -1963,8 +1963,9 @@ static int ath10k_pci_hif_start(struct ath10k *ar)
|
||||
ath10k_pci_irq_enable(ar);
|
||||
ath10k_pci_rx_post(ar);
|
||||
|
||||
pcie_capability_write_word(ar_pci->pdev, PCI_EXP_LNKCTL,
|
||||
ar_pci->link_ctl);
|
||||
pcie_capability_clear_and_set_word(ar_pci->pdev, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_ASPMC,
|
||||
ar_pci->link_ctl & PCI_EXP_LNKCTL_ASPMC);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -2821,8 +2822,8 @@ static int ath10k_pci_hif_power_up(struct ath10k *ar,
|
||||
|
||||
pcie_capability_read_word(ar_pci->pdev, PCI_EXP_LNKCTL,
|
||||
&ar_pci->link_ctl);
|
||||
pcie_capability_write_word(ar_pci->pdev, PCI_EXP_LNKCTL,
|
||||
ar_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC);
|
||||
pcie_capability_clear_word(ar_pci->pdev, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_ASPMC);
|
||||
|
||||
/*
|
||||
* Bring the target up cleanly.
|
||||
|
@ -582,8 +582,8 @@ static void ath11k_pci_aspm_disable(struct ath11k_pci *ab_pci)
|
||||
u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L1));
|
||||
|
||||
/* disable L0s and L1 */
|
||||
pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
|
||||
ab_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC);
|
||||
pcie_capability_clear_word(ab_pci->pdev, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_ASPMC);
|
||||
|
||||
set_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags);
|
||||
}
|
||||
@ -591,8 +591,10 @@ static void ath11k_pci_aspm_disable(struct ath11k_pci *ab_pci)
|
||||
static void ath11k_pci_aspm_restore(struct ath11k_pci *ab_pci)
|
||||
{
|
||||
if (test_and_clear_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags))
|
||||
pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
|
||||
ab_pci->link_ctl);
|
||||
pcie_capability_clear_and_set_word(ab_pci->pdev, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_ASPMC,
|
||||
ab_pci->link_ctl &
|
||||
PCI_EXP_LNKCTL_ASPMC);
|
||||
}
|
||||
|
||||
static int ath11k_pci_power_up(struct ath11k_base *ab)
|
||||
|
@ -794,8 +794,8 @@ static void ath12k_pci_aspm_disable(struct ath12k_pci *ab_pci)
|
||||
u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L1));
|
||||
|
||||
/* disable L0s and L1 */
|
||||
pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
|
||||
ab_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC);
|
||||
pcie_capability_clear_word(ab_pci->pdev, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_ASPMC);
|
||||
|
||||
set_bit(ATH12K_PCI_ASPM_RESTORE, &ab_pci->flags);
|
||||
}
|
||||
@ -803,8 +803,10 @@ static void ath12k_pci_aspm_disable(struct ath12k_pci *ab_pci)
|
||||
static void ath12k_pci_aspm_restore(struct ath12k_pci *ab_pci)
|
||||
{
|
||||
if (test_and_clear_bit(ATH12K_PCI_ASPM_RESTORE, &ab_pci->flags))
|
||||
pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
|
||||
ab_pci->link_ctl);
|
||||
pcie_capability_clear_and_set_word(ab_pci->pdev, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_ASPMC,
|
||||
ab_pci->link_ctl &
|
||||
PCI_EXP_LNKCTL_ASPMC);
|
||||
}
|
||||
|
||||
static void ath12k_pci_kill_tasklets(struct ath12k_base *ab)
|
||||
|
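The ath10k/ath11k/ath12k hunks above all make the same conversion: rather than writing back the entire saved Link Control word, only the ASPM Control field is cleared and later restored through the RMW accessors, so unrelated bits owned by the PCI core are left alone. Roughly, the pattern is as follows (the save slot and function names are placeholders):

    #include <linux/pci.h>

    static u16 example_saved_lnkctl;   /* placeholder for the driver's save slot */

    static void example_aspm_disable(struct pci_dev *pdev)
    {
            pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &example_saved_lnkctl);

            /* Clear only the ASPM Control bits (L0s/L1); leave the rest alone. */
            pcie_capability_clear_word(pdev, PCI_EXP_LNKCTL,
                                       PCI_EXP_LNKCTL_ASPMC);
    }

    static void example_aspm_restore(struct pci_dev *pdev)
    {
            /* Put back only the saved ASPM state, touching nothing else. */
            pcie_capability_clear_and_set_word(pdev, PCI_EXP_LNKCTL,
                                               PCI_EXP_LNKCTL_ASPMC,
                                               example_saved_lnkctl &
                                               PCI_EXP_LNKCTL_ASPMC);
    }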
@@ -497,22 +497,35 @@ int pcie_capability_write_dword(struct pci_dev *dev, int pos, u32 val)
 }
 EXPORT_SYMBOL(pcie_capability_write_dword);
 
-int pcie_capability_clear_and_set_word(struct pci_dev *dev, int pos,
-				       u16 clear, u16 set)
+int pcie_capability_clear_and_set_word_unlocked(struct pci_dev *dev, int pos,
+						u16 clear, u16 set)
 {
 	int ret;
 	u16 val;
 
 	ret = pcie_capability_read_word(dev, pos, &val);
-	if (!ret) {
-		val &= ~clear;
-		val |= set;
-		ret = pcie_capability_write_word(dev, pos, val);
-	}
+	if (ret)
+		return ret;
+
+	val &= ~clear;
+	val |= set;
+	return pcie_capability_write_word(dev, pos, val);
+}
+EXPORT_SYMBOL(pcie_capability_clear_and_set_word_unlocked);
+
+int pcie_capability_clear_and_set_word_locked(struct pci_dev *dev, int pos,
+					      u16 clear, u16 set)
+{
+	unsigned long flags;
+	int ret;
+
+	spin_lock_irqsave(&dev->pcie_cap_lock, flags);
+	ret = pcie_capability_clear_and_set_word_unlocked(dev, pos, clear, set);
+	spin_unlock_irqrestore(&dev->pcie_cap_lock, flags);
 
 	return ret;
 }
-EXPORT_SYMBOL(pcie_capability_clear_and_set_word);
+EXPORT_SYMBOL(pcie_capability_clear_and_set_word_locked);
 
 int pcie_capability_clear_and_set_dword(struct pci_dev *dev, int pos,
 					u32 clear, u32 set)
@@ -521,13 +534,12 @@ int pcie_capability_clear_and_set_dword(struct pci_dev *dev, int pos,
 	u32 val;
 
 	ret = pcie_capability_read_dword(dev, pos, &val);
-	if (!ret) {
-		val &= ~clear;
-		val |= set;
-		ret = pcie_capability_write_dword(dev, pos, val);
-	}
+	if (ret)
+		return ret;
 
-	return ret;
+	val &= ~clear;
+	val |= set;
+	return pcie_capability_write_dword(dev, pos, val);
 }
 EXPORT_SYMBOL(pcie_capability_clear_and_set_dword);

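With the split above, existing callers can keep using pcie_capability_clear_and_set_word(); the header turns it into a dispatcher so that only the registers the documentation lists as protected (Link Control, Root Control) take dev->pcie_cap_lock. The exact header code is not part of this hunk; the sketch below is an approximation of that idea, not a verbatim copy:

    static inline int pcie_capability_clear_and_set_word(struct pci_dev *dev,
                                                          int pos, u16 clear,
                                                          u16 set)
    {
            switch (pos) {
            case PCI_EXP_LNKCTL:
            case PCI_EXP_RTCTL:
                    /* Shared registers: serialize concurrent RMW updates. */
                    return pcie_capability_clear_and_set_word_locked(dev, pos,
                                                                     clear, set);
            default:
                    return pcie_capability_clear_and_set_word_unlocked(dev, pos,
                                                                       clear, set);
            }
    }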
@@ -216,7 +216,7 @@ config PCIE_MT7621
 	  This selects a driver for the MediaTek MT7621 PCIe Controller.
 
 config PCIE_MICROCHIP_HOST
-	bool "Microchip AXI PCIe controller"
+	tristate "Microchip AXI PCIe controller"
 	depends on PCI_MSI && OF
 	select PCI_HOST_COMMON
 	help

@ -14,8 +14,8 @@
|
||||
#include <linux/irqdomain.h>
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
#include <linux/regmap.h>
|
||||
|
||||
|
@ -6,11 +6,10 @@
|
||||
* Author: Tom Joseph <tjoseph@cadence.com>
|
||||
*/
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
#include <linux/of_device.h>
|
||||
#include "pcie-cadence.h"
|
||||
|
||||
#define CDNS_PLAT_CPU_TO_BUS_ADDR 0x0FFFFFFF
|
||||
|
@ -4,6 +4,7 @@
|
||||
// Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/of.h>
|
||||
|
||||
#include "pcie-cadence.h"
|
||||
|
||||
|
@ -32,7 +32,7 @@
|
||||
#define CDNS_PCIE_LM_ID_SUBSYS(sub) \
|
||||
(((sub) << CDNS_PCIE_LM_ID_SUBSYS_SHIFT) & CDNS_PCIE_LM_ID_SUBSYS_MASK)
|
||||
|
||||
/* Root Port Requestor ID Register */
|
||||
/* Root Port Requester ID Register */
|
||||
#define CDNS_PCIE_LM_RP_RID (CDNS_PCIE_LM_BASE + 0x0228)
|
||||
#define CDNS_PCIE_LM_RP_RID_MASK GENMASK(15, 0)
|
||||
#define CDNS_PCIE_LM_RP_RID_SHIFT 0
|
||||
|
@ -16,7 +16,7 @@
|
||||
#include <linux/irqdomain.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_gpio.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/pci.h>
|
||||
|
@ -14,11 +14,11 @@
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/phy/phy.h>
|
||||
#include <linux/regulator/consumer.h>
|
||||
#include <linux/mod_devicetable.h>
|
||||
#include <linux/module.h>
|
||||
|
||||
#include "pcie-designware.h"
|
||||
|
@ -17,8 +17,8 @@
|
||||
#include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
|
||||
#include <linux/mfd/syscon/imx7-iomuxc-gpr.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_gpio.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
@ -1040,6 +1040,7 @@ static void imx6_pcie_host_exit(struct dw_pcie_rp *pp)
|
||||
|
||||
static const struct dw_pcie_host_ops imx6_pcie_host_ops = {
|
||||
.host_init = imx6_pcie_host_init,
|
||||
.host_deinit = imx6_pcie_host_exit,
|
||||
};
|
||||
|
||||
static const struct dw_pcie_ops dw_pcie_ops = {
|
||||
@ -1282,8 +1283,7 @@ static int imx6_pcie_probe(struct platform_device *pdev)
|
||||
return PTR_ERR(imx6_pcie->phy_base);
|
||||
}
|
||||
|
||||
dbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
pci->dbi_base = devm_ioremap_resource(dev, dbi_base);
|
||||
pci->dbi_base = devm_platform_get_and_ioremap_resource(pdev, 0, &dbi_base);
|
||||
if (IS_ERR(pci->dbi_base))
|
||||
return PTR_ERR(pci->dbi_base);
|
||||
|
||||
|
@ -19,7 +19,6 @@
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/msi.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/phy/phy.h>
|
||||
|
@ -45,6 +45,7 @@ struct ls_pcie_ep {
|
||||
struct pci_epc_features *ls_epc;
|
||||
const struct ls_pcie_ep_drvdata *drvdata;
|
||||
int irq;
|
||||
u32 lnkcap;
|
||||
bool big_endian;
|
||||
};
|
||||
|
||||
@ -73,6 +74,7 @@ static irqreturn_t ls_pcie_ep_event_handler(int irq, void *dev_id)
|
||||
struct ls_pcie_ep *pcie = dev_id;
|
||||
struct dw_pcie *pci = pcie->pci;
|
||||
u32 val, cfg;
|
||||
u8 offset;
|
||||
|
||||
val = ls_lut_readl(pcie, PEX_PF0_PME_MES_DR);
|
||||
ls_lut_writel(pcie, PEX_PF0_PME_MES_DR, val);
|
||||
@ -81,6 +83,19 @@ static irqreturn_t ls_pcie_ep_event_handler(int irq, void *dev_id)
|
||||
return IRQ_NONE;
|
||||
|
||||
if (val & PEX_PF0_PME_MES_DR_LUD) {
|
||||
|
||||
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
|
||||
|
||||
/*
|
||||
* The values of the Maximum Link Width and Supported Link
|
||||
* Speed from the Link Capabilities Register will be lost
|
||||
* during link down or hot reset. Restore initial value
|
||||
* that configured by the Reset Configuration Word (RCW).
|
||||
*/
|
||||
dw_pcie_dbi_ro_wr_en(pci);
|
||||
dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, pcie->lnkcap);
|
||||
dw_pcie_dbi_ro_wr_dis(pci);
|
||||
|
||||
cfg = ls_lut_readl(pcie, PEX_PF0_CONFIG);
|
||||
cfg |= PEX_PF0_CFG_READY;
|
||||
ls_lut_writel(pcie, PEX_PF0_CONFIG, cfg);
|
||||
@ -89,6 +104,7 @@ static irqreturn_t ls_pcie_ep_event_handler(int irq, void *dev_id)
|
||||
dev_dbg(pci->dev, "Link up\n");
|
||||
} else if (val & PEX_PF0_PME_MES_DR_LDD) {
|
||||
dev_dbg(pci->dev, "Link down\n");
|
||||
pci_epc_linkdown(pci->ep.epc);
|
||||
} else if (val & PEX_PF0_PME_MES_DR_HRD) {
|
||||
dev_dbg(pci->dev, "Hot reset\n");
|
||||
}
|
||||
@ -215,6 +231,7 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
|
||||
struct ls_pcie_ep *pcie;
|
||||
struct pci_epc_features *ls_epc;
|
||||
struct resource *dbi_base;
|
||||
u8 offset;
|
||||
int ret;
|
||||
|
||||
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
|
||||
@ -251,6 +268,9 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
|
||||
|
||||
platform_set_drvdata(pdev, pcie);
|
||||
|
||||
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
|
||||
pcie->lnkcap = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
|
||||
|
||||
ret = dw_pcie_ep_init(&pci->ep);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
@ -8,9 +8,11 @@
|
||||
* Author: Minghuan Lian <Minghuan.Lian@freescale.com>
|
||||
*/
|
||||
|
||||
#include <linux/delay.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/iopoll.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/of_address.h>
|
||||
@ -20,6 +22,7 @@
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/regmap.h>
|
||||
|
||||
#include "../../pci.h"
|
||||
#include "pcie-designware.h"
|
||||
|
||||
/* PEX Internal Configuration Registers */
|
||||
@ -27,12 +30,26 @@
|
||||
#define PCIE_ABSERR 0x8d0 /* Bridge Slave Error Response Register */
|
||||
#define PCIE_ABSERR_SETTING 0x9401 /* Forward error of non-posted request */
|
||||
|
||||
/* PF Message Command Register */
|
||||
#define LS_PCIE_PF_MCR 0x2c
|
||||
#define PF_MCR_PTOMR BIT(0)
|
||||
#define PF_MCR_EXL2S BIT(1)
|
||||
|
||||
#define PCIE_IATU_NUM 6
|
||||
|
||||
struct ls_pcie_drvdata {
|
||||
const u32 pf_off;
|
||||
bool pm_support;
|
||||
};
|
||||
|
||||
struct ls_pcie {
|
||||
struct dw_pcie *pci;
|
||||
const struct ls_pcie_drvdata *drvdata;
|
||||
void __iomem *pf_base;
|
||||
bool big_endian;
|
||||
};
|
||||
|
||||
#define ls_pcie_pf_readl_addr(addr) ls_pcie_pf_readl(pcie, addr)
|
||||
#define to_ls_pcie(x) dev_get_drvdata((x)->dev)
|
||||
|
||||
static bool ls_pcie_is_bridge(struct ls_pcie *pcie)
|
||||
@ -73,6 +90,68 @@ static void ls_pcie_fix_error_response(struct ls_pcie *pcie)
|
||||
iowrite32(PCIE_ABSERR_SETTING, pci->dbi_base + PCIE_ABSERR);
|
||||
}
|
||||
|
||||
static u32 ls_pcie_pf_readl(struct ls_pcie *pcie, u32 off)
|
||||
{
|
||||
if (pcie->big_endian)
|
||||
return ioread32be(pcie->pf_base + off);
|
||||
|
||||
return ioread32(pcie->pf_base + off);
|
||||
}
|
||||
|
||||
static void ls_pcie_pf_writel(struct ls_pcie *pcie, u32 off, u32 val)
|
||||
{
|
||||
if (pcie->big_endian)
|
||||
iowrite32be(val, pcie->pf_base + off);
|
||||
else
|
||||
iowrite32(val, pcie->pf_base + off);
|
||||
}
|
||||
|
||||
static void ls_pcie_send_turnoff_msg(struct dw_pcie_rp *pp)
|
||||
{
|
||||
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
|
||||
struct ls_pcie *pcie = to_ls_pcie(pci);
|
||||
u32 val;
|
||||
int ret;
|
||||
|
||||
val = ls_pcie_pf_readl(pcie, LS_PCIE_PF_MCR);
|
||||
val |= PF_MCR_PTOMR;
|
||||
ls_pcie_pf_writel(pcie, LS_PCIE_PF_MCR, val);
|
||||
|
||||
ret = readx_poll_timeout(ls_pcie_pf_readl_addr, LS_PCIE_PF_MCR,
|
||||
val, !(val & PF_MCR_PTOMR),
|
||||
PCIE_PME_TO_L2_TIMEOUT_US/10,
|
||||
PCIE_PME_TO_L2_TIMEOUT_US);
|
||||
if (ret)
|
||||
dev_err(pcie->pci->dev, "PME_Turn_off timeout\n");
|
||||
}
|
||||
|
||||
static void ls_pcie_exit_from_l2(struct dw_pcie_rp *pp)
|
||||
{
|
||||
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
|
||||
struct ls_pcie *pcie = to_ls_pcie(pci);
|
||||
u32 val;
|
||||
int ret;
|
||||
|
||||
/*
|
||||
* Set PF_MCR_EXL2S bit in LS_PCIE_PF_MCR register for the link
|
||||
* to exit L2 state.
|
||||
*/
|
||||
val = ls_pcie_pf_readl(pcie, LS_PCIE_PF_MCR);
|
||||
val |= PF_MCR_EXL2S;
|
||||
ls_pcie_pf_writel(pcie, LS_PCIE_PF_MCR, val);
|
||||
|
||||
/*
|
||||
* L2 exit timeout of 10ms is not defined in the specifications,
|
||||
* it was chosen based on empirical observations.
|
||||
*/
|
||||
ret = readx_poll_timeout(ls_pcie_pf_readl_addr, LS_PCIE_PF_MCR,
|
||||
val, !(val & PF_MCR_EXL2S),
|
||||
1000,
|
||||
10000);
|
||||
if (ret)
|
||||
dev_err(pcie->pci->dev, "L2 exit timeout\n");
|
||||
}
|
||||
|
||||
static int ls_pcie_host_init(struct dw_pcie_rp *pp)
|
||||
{
|
||||
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
|
||||
@ -91,18 +170,28 @@ static int ls_pcie_host_init(struct dw_pcie_rp *pp)
|
||||
|
||||
static const struct dw_pcie_host_ops ls_pcie_host_ops = {
|
||||
.host_init = ls_pcie_host_init,
|
||||
.pme_turn_off = ls_pcie_send_turnoff_msg,
|
||||
};
|
||||
|
||||
static const struct ls_pcie_drvdata ls1021a_drvdata = {
|
||||
.pm_support = false,
|
||||
};
|
||||
|
||||
static const struct ls_pcie_drvdata layerscape_drvdata = {
|
||||
.pf_off = 0xc0000,
|
||||
.pm_support = true,
|
||||
};
|
||||
|
||||
static const struct of_device_id ls_pcie_of_match[] = {
|
||||
{ .compatible = "fsl,ls1012a-pcie", },
|
||||
{ .compatible = "fsl,ls1021a-pcie", },
|
||||
{ .compatible = "fsl,ls1028a-pcie", },
|
||||
{ .compatible = "fsl,ls1043a-pcie", },
|
||||
{ .compatible = "fsl,ls1046a-pcie", },
|
||||
{ .compatible = "fsl,ls2080a-pcie", },
|
||||
{ .compatible = "fsl,ls2085a-pcie", },
|
||||
{ .compatible = "fsl,ls2088a-pcie", },
|
||||
{ .compatible = "fsl,ls1088a-pcie", },
|
||||
{ .compatible = "fsl,ls1012a-pcie", .data = &layerscape_drvdata },
|
||||
{ .compatible = "fsl,ls1021a-pcie", .data = &ls1021a_drvdata },
|
||||
{ .compatible = "fsl,ls1028a-pcie", .data = &layerscape_drvdata },
|
||||
{ .compatible = "fsl,ls1043a-pcie", .data = &ls1021a_drvdata },
|
||||
{ .compatible = "fsl,ls1046a-pcie", .data = &layerscape_drvdata },
|
||||
{ .compatible = "fsl,ls2080a-pcie", .data = &layerscape_drvdata },
|
||||
{ .compatible = "fsl,ls2085a-pcie", .data = &layerscape_drvdata },
|
||||
{ .compatible = "fsl,ls2088a-pcie", .data = &layerscape_drvdata },
|
||||
{ .compatible = "fsl,ls1088a-pcie", .data = &layerscape_drvdata },
|
||||
{ },
|
||||
};
|
||||
|
||||
@ -121,6 +210,8 @@ static int ls_pcie_probe(struct platform_device *pdev)
|
||||
if (!pci)
|
||||
return -ENOMEM;
|
||||
|
||||
pcie->drvdata = of_device_get_match_data(dev);
|
||||
|
||||
pci->dev = dev;
|
||||
pci->pp.ops = &ls_pcie_host_ops;
|
||||
|
||||
@ -131,6 +222,10 @@ static int ls_pcie_probe(struct platform_device *pdev)
|
||||
if (IS_ERR(pci->dbi_base))
|
||||
return PTR_ERR(pci->dbi_base);
|
||||
|
||||
pcie->big_endian = of_property_read_bool(dev->of_node, "big-endian");
|
||||
|
||||
pcie->pf_base = pci->dbi_base + pcie->drvdata->pf_off;
|
||||
|
||||
if (!ls_pcie_is_bridge(pcie))
|
||||
return -ENODEV;
|
||||
|
||||
@ -139,12 +234,39 @@ static int ls_pcie_probe(struct platform_device *pdev)
|
||||
return dw_pcie_host_init(&pci->pp);
|
||||
}
|
||||
|
||||
static int ls_pcie_suspend_noirq(struct device *dev)
|
||||
{
|
||||
struct ls_pcie *pcie = dev_get_drvdata(dev);
|
||||
|
||||
if (!pcie->drvdata->pm_support)
|
||||
return 0;
|
||||
|
||||
return dw_pcie_suspend_noirq(pcie->pci);
|
||||
}
|
||||
|
||||
static int ls_pcie_resume_noirq(struct device *dev)
|
||||
{
|
||||
struct ls_pcie *pcie = dev_get_drvdata(dev);
|
||||
|
||||
if (!pcie->drvdata->pm_support)
|
||||
return 0;
|
||||
|
||||
ls_pcie_exit_from_l2(&pcie->pci->pp);
|
||||
|
||||
return dw_pcie_resume_noirq(pcie->pci);
|
||||
}
|
||||
|
||||
static const struct dev_pm_ops ls_pcie_pm_ops = {
|
||||
NOIRQ_SYSTEM_SLEEP_PM_OPS(ls_pcie_suspend_noirq, ls_pcie_resume_noirq)
|
||||
};
|
||||
|
||||
static struct platform_driver ls_pcie_driver = {
|
||||
.probe = ls_pcie_probe,
|
||||
.driver = {
|
||||
.name = "layerscape-pcie",
|
||||
.of_match_table = ls_pcie_of_match,
|
||||
.suppress_bind_attrs = true,
|
||||
.pm = &ls_pcie_pm_ops,
|
||||
},
|
||||
};
|
||||
builtin_platform_driver(ls_pcie_driver);
|
||||
|
@ -9,7 +9,6 @@
|
||||
#include <linux/clk.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of_gpio.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
@ -17,6 +16,7 @@
|
||||
#include <linux/resource.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/phy/phy.h>
|
||||
#include <linux/mod_devicetable.h>
|
||||
#include <linux/module.h>
|
||||
|
||||
#include "pcie-designware.h"
|
||||
@ -163,6 +163,13 @@ static int meson_pcie_reset(struct meson_pcie *mp)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline void meson_pcie_disable_clock(void *data)
|
||||
{
|
||||
struct clk *clk = data;
|
||||
|
||||
clk_disable_unprepare(clk);
|
||||
}
|
||||
|
||||
static inline struct clk *meson_pcie_probe_clock(struct device *dev,
|
||||
const char *id, u64 rate)
|
||||
{
|
||||
@ -187,9 +194,7 @@ static inline struct clk *meson_pcie_probe_clock(struct device *dev,
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
|
||||
devm_add_action_or_reset(dev,
|
||||
(void (*) (void *))clk_disable_unprepare,
|
||||
clk);
|
||||
devm_add_action_or_reset(dev, meson_pcie_disable_clock, clk);
|
||||
|
||||
return clk;
|
||||
}
|
||||
|
@ -10,7 +10,7 @@
|
||||
#include <linux/delay.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/resource.h>
|
||||
|
@ -8,6 +8,7 @@
|
||||
* Author: Jingoo Han <jg1.han@samsung.com>
|
||||
*/
|
||||
|
||||
#include <linux/iopoll.h>
|
||||
#include <linux/irqchip/chained_irq.h>
|
||||
#include <linux/irqdomain.h>
|
||||
#include <linux/msi.h>
|
||||
@ -16,6 +17,7 @@
|
||||
#include <linux/pci_regs.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
||||
#include "../../pci.h"
|
||||
#include "pcie-designware.h"
|
||||
|
||||
static struct pci_ops dw_pcie_ops;
|
||||
@ -807,3 +809,72 @@ int dw_pcie_setup_rc(struct dw_pcie_rp *pp)
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dw_pcie_setup_rc);
|
||||
|
||||
int dw_pcie_suspend_noirq(struct dw_pcie *pci)
|
||||
{
|
||||
u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
|
||||
u32 val;
|
||||
int ret;
|
||||
|
||||
/*
|
||||
* If L1SS is supported, then do not put the link into L2 as some
|
||||
* devices such as NVMe expect low resume latency.
|
||||
*/
|
||||
if (dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKCTL) & PCI_EXP_LNKCTL_ASPM_L1)
|
||||
return 0;
|
||||
|
||||
if (dw_pcie_get_ltssm(pci) <= DW_PCIE_LTSSM_DETECT_ACT)
|
||||
return 0;
|
||||
|
||||
if (!pci->pp.ops->pme_turn_off)
|
||||
return 0;
|
||||
|
||||
pci->pp.ops->pme_turn_off(&pci->pp);
|
||||
|
||||
ret = read_poll_timeout(dw_pcie_get_ltssm, val, val == DW_PCIE_LTSSM_L2_IDLE,
|
||||
PCIE_PME_TO_L2_TIMEOUT_US/10,
|
||||
PCIE_PME_TO_L2_TIMEOUT_US, false, pci);
|
||||
if (ret) {
|
||||
dev_err(pci->dev, "Timeout waiting for L2 entry! LTSSM: 0x%x\n", val);
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (pci->pp.ops->host_deinit)
|
||||
pci->pp.ops->host_deinit(&pci->pp);
|
||||
|
||||
pci->suspended = true;
|
||||
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dw_pcie_suspend_noirq);
|
||||
|
||||
int dw_pcie_resume_noirq(struct dw_pcie *pci)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (!pci->suspended)
|
||||
return 0;
|
||||
|
||||
pci->suspended = false;
|
||||
|
||||
if (pci->pp.ops->host_init) {
|
||||
ret = pci->pp.ops->host_init(&pci->pp);
|
||||
if (ret) {
|
||||
dev_err(pci->dev, "Host init failed: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
dw_pcie_setup_rc(&pci->pp);
|
||||
|
||||
ret = dw_pcie_start_link(pci);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = dw_pcie_wait_for_link(pci);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dw_pcie_resume_noirq);
|
||||
|
@ -12,7 +12,7 @@
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/resource.h>
|
||||
|
@ -16,7 +16,7 @@
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/ioport.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/sizes.h>
|
||||
#include <linux/types.h>
|
||||
|
||||
|
@ -288,10 +288,21 @@ enum dw_pcie_core_rst {
|
||||
DW_PCIE_NUM_CORE_RSTS
|
||||
};
|
||||
|
||||
enum dw_pcie_ltssm {
|
||||
/* Need to align with PCIE_PORT_DEBUG0 bits 0:5 */
|
||||
DW_PCIE_LTSSM_DETECT_QUIET = 0x0,
|
||||
DW_PCIE_LTSSM_DETECT_ACT = 0x1,
|
||||
DW_PCIE_LTSSM_L0 = 0x11,
|
||||
DW_PCIE_LTSSM_L2_IDLE = 0x15,
|
||||
|
||||
DW_PCIE_LTSSM_UNKNOWN = 0xFFFFFFFF,
|
||||
};
|
||||
|
||||
struct dw_pcie_host_ops {
|
||||
int (*host_init)(struct dw_pcie_rp *pp);
|
||||
void (*host_deinit)(struct dw_pcie_rp *pp);
|
||||
int (*msi_host_init)(struct dw_pcie_rp *pp);
|
||||
void (*pme_turn_off)(struct dw_pcie_rp *pp);
|
||||
};
|
||||
|
||||
struct dw_pcie_rp {
|
||||
@ -364,6 +375,7 @@ struct dw_pcie_ops {
|
||||
void (*write_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
|
||||
size_t size, u32 val);
|
||||
int (*link_up)(struct dw_pcie *pcie);
|
||||
enum dw_pcie_ltssm (*get_ltssm)(struct dw_pcie *pcie);
|
||||
int (*start_link)(struct dw_pcie *pcie);
|
||||
void (*stop_link)(struct dw_pcie *pcie);
|
||||
};
|
||||
@ -393,6 +405,7 @@ struct dw_pcie {
|
||||
struct reset_control_bulk_data app_rsts[DW_PCIE_NUM_APP_RSTS];
|
||||
struct reset_control_bulk_data core_rsts[DW_PCIE_NUM_CORE_RSTS];
|
||||
struct gpio_desc *pe_rst;
|
||||
bool suspended;
|
||||
};
|
||||
|
||||
#define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp)
|
||||
@ -430,6 +443,9 @@ void dw_pcie_iatu_detect(struct dw_pcie *pci);
|
||||
int dw_pcie_edma_detect(struct dw_pcie *pci);
|
||||
void dw_pcie_edma_remove(struct dw_pcie *pci);
|
||||
|
||||
int dw_pcie_suspend_noirq(struct dw_pcie *pci);
|
||||
int dw_pcie_resume_noirq(struct dw_pcie *pci);
|
||||
|
||||
static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val)
|
||||
{
|
||||
dw_pcie_write_dbi(pci, reg, 0x4, val);
|
||||
@ -501,6 +517,18 @@ static inline void dw_pcie_stop_link(struct dw_pcie *pci)
|
||||
pci->ops->stop_link(pci);
|
||||
}
|
||||
|
||||
static inline enum dw_pcie_ltssm dw_pcie_get_ltssm(struct dw_pcie *pci)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
if (pci->ops && pci->ops->get_ltssm)
|
||||
return pci->ops->get_ltssm(pci);
|
||||
|
||||
val = dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0);
|
||||
|
||||
return (enum dw_pcie_ltssm)FIELD_GET(PORT_LOGIC_LTSSM_STATE_MASK, val);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PCIE_DW_HOST
|
||||
irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp);
|
||||
int dw_pcie_setup_rc(struct dw_pcie_rp *pp);
|
||||
|
@ -14,7 +14,7 @@
|
||||
#include <linux/irqdomain.h>
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/phy/phy.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
@ -299,6 +299,7 @@ static int fu740_pcie_probe(struct platform_device *pdev)
|
||||
pci->dev = dev;
|
||||
pci->ops = &dw_pcie_ops;
|
||||
pci->pp.ops = &fu740_pcie_host_ops;
|
||||
pci->pp.num_vectors = MAX_MSI_IRQS;
|
||||
|
||||
/* SiFive specific region: mgmt */
|
||||
afp->mgmt_base = devm_platform_ioremap_resource_byname(pdev, "mgmt");
|
||||
|
@ -9,9 +9,11 @@
|
||||
#include <linux/clk.h>
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/iopoll.h>
|
||||
#include <linux/mod_devicetable.h>
|
||||
#include <linux/pci_regs.h>
|
||||
#include <linux/phy/phy.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/property.h>
|
||||
#include <linux/reset.h>
|
||||
|
||||
#include "../../pci.h"
|
||||
|
@ -148,6 +148,13 @@ static const struct dw_pcie_ops keembay_pcie_ops = {
|
||||
.stop_link = keembay_pcie_stop_link,
|
||||
};
|
||||
|
||||
static inline void keembay_pcie_disable_clock(void *data)
|
||||
{
|
||||
struct clk *clk = data;
|
||||
|
||||
clk_disable_unprepare(clk);
|
||||
}
|
||||
|
||||
static inline struct clk *keembay_pcie_probe_clock(struct device *dev,
|
||||
const char *id, u64 rate)
|
||||
{
|
||||
@ -168,9 +175,7 @@ static inline struct clk *keembay_pcie_probe_clock(struct device *dev,
|
||||
if (ret)
|
||||
return ERR_PTR(ret);
|
||||
|
||||
ret = devm_add_action_or_reset(dev,
|
||||
(void(*)(void *))clk_disable_unprepare,
|
||||
clk);
|
||||
ret = devm_add_action_or_reset(dev, keembay_pcie_disable_clock, clk);
|
||||
if (ret)
|
||||
return ERR_PTR(ret);
|
||||
|
||||
|
@ -16,8 +16,7 @@
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_gpio.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/phy/phy.h>
|
||||
|
@ -13,6 +13,7 @@
|
||||
#include <linux/debugfs.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/interconnect.h>
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/phy/pcie.h>
|
||||
#include <linux/phy/phy.h>
|
||||
@ -74,6 +75,7 @@
|
||||
#define PARF_INT_ALL_PLS_ERR BIT(15)
|
||||
#define PARF_INT_ALL_PME_LEGACY BIT(16)
|
||||
#define PARF_INT_ALL_PLS_PME BIT(17)
|
||||
#define PARF_INT_ALL_EDMA BIT(22)
|
||||
|
||||
/* PARF_BDF_TO_SID_CFG register fields */
|
||||
#define PARF_BDF_TO_SID_BYPASS BIT(0)
|
||||
@ -133,6 +135,11 @@
|
||||
#define CORE_RESET_TIME_US_MAX 1005
|
||||
#define WAKE_DELAY_US 2000 /* 2 ms */
|
||||
|
||||
#define PCIE_GEN1_BW_MBPS 250
|
||||
#define PCIE_GEN2_BW_MBPS 500
|
||||
#define PCIE_GEN3_BW_MBPS 985
|
||||
#define PCIE_GEN4_BW_MBPS 1969
|
||||
|
||||
#define to_pcie_ep(x) dev_get_drvdata((x)->dev)
|
||||
|
||||
enum qcom_pcie_ep_link_status {
|
||||
@ -155,6 +162,7 @@ enum qcom_pcie_ep_link_status {
|
||||
* @wake: WAKE# GPIO
|
||||
* @phy: PHY controller block
|
||||
* @debugfs: PCIe Endpoint Debugfs directory
|
||||
* @icc_mem: Handle to an interconnect path between PCIe and MEM
|
||||
* @clks: PCIe clocks
|
||||
* @num_clks: PCIe clocks count
|
||||
* @perst_en: Flag for PERST enable
|
||||
@ -178,6 +186,8 @@ struct qcom_pcie_ep {
|
||||
struct phy *phy;
|
||||
struct dentry *debugfs;
|
||||
|
||||
struct icc_path *icc_mem;
|
||||
|
||||
struct clk_bulk_data *clks;
|
||||
int num_clks;
|
||||
|
||||
@ -253,8 +263,49 @@ static void qcom_pcie_dw_stop_link(struct dw_pcie *pci)
|
||||
disable_irq(pcie_ep->perst_irq);
|
||||
}
|
||||
|
||||
static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep)
|
||||
{
|
||||
struct dw_pcie *pci = &pcie_ep->pci;
|
||||
u32 offset, status, bw;
|
||||
int speed, width;
|
||||
int ret;
|
||||
|
||||
if (!pcie_ep->icc_mem)
|
||||
return;
|
||||
|
||||
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
|
||||
status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA);
|
||||
|
||||
speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
|
||||
width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);
|
||||
|
||||
switch (speed) {
|
||||
case 1:
|
||||
bw = MBps_to_icc(PCIE_GEN1_BW_MBPS);
|
||||
break;
|
||||
case 2:
|
||||
bw = MBps_to_icc(PCIE_GEN2_BW_MBPS);
|
||||
break;
|
||||
case 3:
|
||||
bw = MBps_to_icc(PCIE_GEN3_BW_MBPS);
|
||||
break;
|
||||
default:
|
||||
dev_warn(pci->dev, "using default GEN4 bandwidth\n");
|
||||
fallthrough;
|
||||
case 4:
|
||||
bw = MBps_to_icc(PCIE_GEN4_BW_MBPS);
|
||||
break;
|
||||
}
|
||||
|
||||
ret = icc_set_bw(pcie_ep->icc_mem, 0, width * bw);
|
||||
if (ret)
|
||||
dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
|
||||
ret);
|
||||
}
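Worked example for the vote computed above (the link state is hypothetical): a link trained at Gen 3 x2 reports PCI_EXP_LNKSTA_CLS = 3 and PCI_EXP_LNKSTA_NLW = 2, so the update amounts to:

    /* hypothetical Gen 3 x2 link: 2 lanes x 985 MB/s per lane, peak vote only */
    icc_set_bw(pcie_ep->icc_mem, 0, 2 * MBps_to_icc(PCIE_GEN3_BW_MBPS));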
|
||||
|
||||
static int qcom_pcie_enable_resources(struct qcom_pcie_ep *pcie_ep)
|
||||
{
|
||||
struct dw_pcie *pci = &pcie_ep->pci;
|
||||
int ret;
|
||||
|
||||
ret = clk_bulk_prepare_enable(pcie_ep->num_clks, pcie_ep->clks);
|
||||
@ -277,8 +328,24 @@ static int qcom_pcie_enable_resources(struct qcom_pcie_ep *pcie_ep)
|
||||
if (ret)
|
||||
goto err_phy_exit;
|
||||
|
||||
/*
|
||||
* Some Qualcomm platforms require interconnect bandwidth constraints
|
||||
* to be set before enabling interconnect clocks.
|
||||
*
|
||||
* Set an initial peak bandwidth corresponding to single-lane Gen 1
|
||||
* for the pcie-mem path.
|
||||
*/
|
||||
ret = icc_set_bw(pcie_ep->icc_mem, 0, MBps_to_icc(PCIE_GEN1_BW_MBPS));
|
||||
if (ret) {
|
||||
dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
|
||||
ret);
|
||||
goto err_phy_off;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
err_phy_off:
|
||||
phy_power_off(pcie_ep->phy);
|
||||
err_phy_exit:
|
||||
phy_exit(pcie_ep->phy);
|
||||
err_disable_clk:
|
||||
@@ -289,6 +356,7 @@ err_disable_clk:

static void qcom_pcie_disable_resources(struct qcom_pcie_ep *pcie_ep)
{
	icc_set_bw(pcie_ep->icc_mem, 0, 0);
	phy_power_off(pcie_ep->phy);
	phy_exit(pcie_ep->phy);
	clk_bulk_disable_unprepare(pcie_ep->num_clks, pcie_ep->clks);
@@ -395,7 +463,7 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
	writel_relaxed(0, pcie_ep->parf + PARF_INT_ALL_MASK);
	val = PARF_INT_ALL_LINK_DOWN | PARF_INT_ALL_BME |
	      PARF_INT_ALL_PM_TURNOFF | PARF_INT_ALL_DSTATE_CHANGE |
	      PARF_INT_ALL_LINK_UP;
	      PARF_INT_ALL_LINK_UP | PARF_INT_ALL_EDMA;
	writel_relaxed(val, pcie_ep->parf + PARF_INT_ALL_MASK);

	ret = dw_pcie_ep_init_complete(&pcie_ep->pci.ep);
@@ -415,7 +483,7 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
	/* Gate Master AXI clock to MHI bus during L1SS */
	val = readl_relaxed(pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL);
	val &= ~PARF_MSTR_AXI_CLK_EN;
	val = readl_relaxed(pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL);
	writel_relaxed(val, pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL);

	dw_pcie_ep_init_notify(&pcie_ep->pci.ep);

@@ -550,6 +618,10 @@ static int qcom_pcie_ep_get_resources(struct platform_device *pdev,
	if (IS_ERR(pcie_ep->phy))
		ret = PTR_ERR(pcie_ep->phy);

	pcie_ep->icc_mem = devm_of_icc_get(dev, "pcie-mem");
	if (IS_ERR(pcie_ep->icc_mem))
		ret = PTR_ERR(pcie_ep->icc_mem);

	return ret;
}

@@ -573,6 +645,7 @@ static irqreturn_t qcom_pcie_ep_global_irq_thread(int irq, void *data)
	} else if (FIELD_GET(PARF_INT_ALL_BME, status)) {
		dev_dbg(dev, "Received BME event. Link is enabled!\n");
		pcie_ep->link_status = QCOM_PCIE_EP_LINK_ENABLED;
		qcom_pcie_ep_icc_update(pcie_ep);
		pci_epc_bme_notify(pci->ep.epc);
	} else if (FIELD_GET(PARF_INT_ALL_PM_TURNOFF, status)) {
		dev_dbg(dev, "Received PM Turn-off event! Entering L23\n");
@@ -593,7 +666,7 @@ static irqreturn_t qcom_pcie_ep_global_irq_thread(int irq, void *data)
		dw_pcie_ep_linkup(&pci->ep);
		pcie_ep->link_status = QCOM_PCIE_EP_LINK_UP;
	} else {
		dev_dbg(dev, "Received unknown event: %d\n", status);
		dev_err(dev, "Received unknown event: %d\n", status);
	}

	return IRQ_HANDLED;
@@ -706,6 +779,7 @@ static const struct pci_epc_features qcom_pcie_epc_features = {
	.core_init_notifier = true,
	.msi_capable = true,
	.msix_capable = false,
	.align = SZ_4K,
};

static const struct pci_epc_features *
@@ -743,6 +817,7 @@ static int qcom_pcie_ep_probe(struct platform_device *pdev)
	pcie_ep->pci.dev = dev;
	pcie_ep->pci.ops = &pci_ops;
	pcie_ep->pci.ep.ops = &pci_ep_ops;
	pcie_ep->pci.edma.nr_irqs = 1;
	platform_set_drvdata(pdev, pcie_ep);

	ret = qcom_pcie_ep_get_resources(pdev, pcie_ep);

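A note on the interconnect voting added above: the peak bandwidth requested on the pcie-mem path is just the negotiated link width multiplied by a per-lane rate picked from the current link speed. The following standalone sketch is illustrative only, not part of the patch; the 250/500/985/1969 MB/s per-lane figures are assumptions based on the usual PCIe payload rates, and the helper name is made up for illustration.

/* Illustrative sketch: map link speed/width to a peak bandwidth in MB/s. */
#include <stdio.h>

static unsigned int pcie_lane_mbps(unsigned int gen)
{
	switch (gen) {
	case 1: return 250;	/* 2.5 GT/s, 8b/10b encoding */
	case 2: return 500;	/* 5.0 GT/s, 8b/10b encoding */
	case 3: return 985;	/* 8.0 GT/s, 128b/130b encoding */
	default:		/* unknown speed: treat as Gen 4 */
	case 4: return 1969;	/* 16 GT/s, 128b/130b encoding */
	}
}

int main(void)
{
	unsigned int speed = 3, width = 2;	/* e.g. a Gen3 x2 link */

	printf("peak bandwidth: %u MB/s\n", width * pcie_lane_mbps(speed));
	return 0;
}

For example, a Gen3 x2 link comes out at roughly 1970 MB/s, which is the kind of figure the driver hands to icc_set_bw().
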
@ -19,7 +19,7 @@
|
||||
#include <linux/iopoll.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_gpio.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
@ -1613,6 +1613,7 @@ static const struct of_device_id qcom_pcie_match[] = {
|
||||
{ .compatible = "qcom,pcie-msm8996", .data = &cfg_2_3_2 },
|
||||
{ .compatible = "qcom,pcie-qcs404", .data = &cfg_2_4_0 },
|
||||
{ .compatible = "qcom,pcie-sa8540p", .data = &cfg_1_9_0 },
|
||||
{ .compatible = "qcom,pcie-sa8775p", .data = &cfg_1_9_0},
|
||||
{ .compatible = "qcom,pcie-sc7280", .data = &cfg_1_9_0 },
|
||||
{ .compatible = "qcom,pcie-sc8180x", .data = &cfg_1_9_0 },
|
||||
{ .compatible = "qcom,pcie-sc8280xp", .data = &cfg_1_9_0 },
|
||||
|
@ -20,7 +20,6 @@
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of_gpio.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/pci.h>
|
||||
@ -900,11 +899,6 @@ static int tegra_pcie_dw_host_init(struct dw_pcie_rp *pp)
|
||||
pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
|
||||
PCI_CAP_ID_EXP);
|
||||
|
||||
val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL);
|
||||
val_16 &= ~PCI_EXP_DEVCTL_PAYLOAD;
|
||||
val_16 |= PCI_EXP_DEVCTL_PAYLOAD_256B;
|
||||
dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL, val_16);
|
||||
|
||||
val = dw_pcie_readl_dbi(pci, PCI_IO_BASE);
|
||||
val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8);
|
||||
dw_pcie_writel_dbi(pci, PCI_IO_BASE, val);
|
||||
@ -1887,11 +1881,6 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
|
||||
pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
|
||||
PCI_CAP_ID_EXP);
|
||||
|
||||
val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL);
|
||||
val_16 &= ~PCI_EXP_DEVCTL_PAYLOAD;
|
||||
val_16 |= PCI_EXP_DEVCTL_PAYLOAD_256B;
|
||||
dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL, val_16);
|
||||
|
||||
/* Clear Slot Clock Configuration bit if SRNS configuration */
|
||||
if (pcie->enable_srns) {
|
||||
val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
|
||||
|
@ -11,7 +11,7 @@
|
||||
#include <linux/delay.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/iopoll.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/phy/phy.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
@ -17,9 +17,6 @@
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/msi.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
@ -15,8 +15,7 @@
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/io.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/pci.h>
|
||||
|
@ -9,8 +9,8 @@
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/pci-ecam.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
@@ -3983,6 +3983,9 @@ static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg)
	struct msi_desc *entry;
	int ret = 0;

	if (!pdev->msi_enabled && !pdev->msix_enabled)
		return 0;

	msi_lock_descs(&pdev->dev);
	msi_for_each_desc(entry, &pdev->dev, MSI_DESC_ASSOCIATED) {
		irq_data = irq_get_irq_data(entry->irq);

@ -19,8 +19,7 @@
|
||||
#include <linux/init.h>
|
||||
#include <linux/io.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
@ -5,7 +5,7 @@
|
||||
* Copyright (C) 2020 Jiaxun Yang <jiaxun.yang@flygoat.com>
|
||||
*/
|
||||
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/pci_ids.h>
|
||||
|
@ -87,7 +87,6 @@ struct mvebu_pcie {
|
||||
struct resource io;
|
||||
struct resource realio;
|
||||
struct resource mem;
|
||||
struct resource busn;
|
||||
int nports;
|
||||
};
|
||||
|
||||
|
@@ -290,8 +290,7 @@ static int rcar_pci_probe(struct platform_device *pdev)
	priv = pci_host_bridge_priv(bridge);
	bridge->sysdata = priv;

	cfg_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	reg = devm_ioremap_resource(dev, cfg_res);
	reg = devm_platform_get_and_ioremap_resource(pdev, 0, &cfg_res);
	if (IS_ERR(reg))
		return PTR_ERR(reg);

@ -20,8 +20,7 @@
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/io.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
@@ -736,8 +735,7 @@ static int v3_pci_probe(struct platform_device *pdev)
		return ret;
	}

	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	v3->base = devm_ioremap_resource(dev, regs);
	v3->base = devm_platform_get_and_ioremap_resource(pdev, 0, &regs);
	if (IS_ERR(v3->base))
		return PTR_ERR(v3->base);
	/*

@ -441,8 +441,7 @@ static int xgene_msi_probe(struct platform_device *pdev)
|
||||
|
||||
platform_set_drvdata(pdev, xgene_msi);
|
||||
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
xgene_msi->msi_regs = devm_ioremap_resource(&pdev->dev, res);
|
||||
xgene_msi->msi_regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
|
||||
if (IS_ERR(xgene_msi->msi_regs)) {
|
||||
rc = PTR_ERR(xgene_msi->msi_regs);
|
||||
goto error;
|
||||
|
@ -9,11 +9,10 @@
|
||||
#include <linux/delay.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/irqchip/chained_irq.h>
|
||||
#include <linux/irqdomain.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
@ -670,7 +670,7 @@ static struct apple_pcie_port *apple_pcie_get_port(struct pci_dev *pdev)
|
||||
static int apple_pcie_add_device(struct apple_pcie_port *port,
|
||||
struct pci_dev *pdev)
|
||||
{
|
||||
u32 sid, rid = PCI_DEVID(pdev->bus->number, pdev->devfn);
|
||||
u32 sid, rid = pci_dev_id(pdev);
|
||||
int idx, err;
|
||||
|
||||
dev_dbg(&pdev->dev, "added to bus %s, index %d\n",
|
||||
@ -701,7 +701,7 @@ static int apple_pcie_add_device(struct apple_pcie_port *port,
|
||||
static void apple_pcie_release_device(struct apple_pcie_port *port,
|
||||
struct pci_dev *pdev)
|
||||
{
|
||||
u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn);
|
||||
u32 rid = pci_dev_id(pdev);
|
||||
int idx;
|
||||
|
||||
mutex_lock(&port->pcie->lock);
|
||||
@ -783,6 +783,10 @@ static int apple_pcie_init(struct pci_config_window *cfg)
|
||||
cfg->priv = pcie;
|
||||
INIT_LIST_HEAD(&pcie->ports);
|
||||
|
||||
ret = apple_msi_init(pcie);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
for_each_child_of_node(dev->of_node, of_port) {
|
||||
ret = apple_pcie_setup_port(pcie, of_port);
|
||||
if (ret) {
|
||||
@ -792,7 +796,7 @@ static int apple_pcie_init(struct pci_config_window *cfg)
|
||||
}
|
||||
}
|
||||
|
||||
return apple_msi_init(pcie);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int apple_pcie_probe(struct platform_device *pdev)
|
||||
|
@@ -439,7 +439,6 @@ static struct irq_chip brcm_msi_irq_chip = {
};

static struct msi_domain_info brcm_msi_domain_info = {
	/* Multi MSI is supported by the controller, but not by this driver */
	.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
		  MSI_FLAG_MULTI_PCI_MSI),
	.chip = &brcm_msi_irq_chip,
@@ -874,6 +873,11 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)

	/* Reset the bridge */
	pcie->bridge_sw_init_set(pcie, 1);

	/* Ensure that PERST# is asserted; some bootloaders may deassert it. */
	if (pcie->type == BCM2711)
		pcie->perst_set(pcie, 1);

	usleep_range(100, 200);

	/* Take the bridge out of reset */

@@ -525,7 +525,7 @@ int iproc_msi_init(struct iproc_pcie *pcie, struct device_node *node)
	if (!of_device_is_compatible(node, "brcm,iproc-msi"))
		return -ENODEV;

	if (!of_find_property(node, "msi-controller", NULL))
	if (!of_property_read_bool(node, "msi-controller"))
		return -ENODEV;

	if (pcie->msi)
@@ -585,8 +585,7 @@ int iproc_msi_init(struct iproc_pcie *pcie, struct device_node *node)
		return -EINVAL;
	}

	if (of_find_property(node, "brcm,pcie-msi-inten", NULL))
		msi->has_inten_reg = true;
	msi->has_inten_reg = of_property_read_bool(node, "brcm,pcie-msi-inten");

	msi->nr_msi_vecs = msi->nr_irqs * EQ_LEN;
	msi->bitmap = devm_bitmap_zalloc(pcie->dev, msi->nr_msi_vecs,

@ -7,6 +7,7 @@
|
||||
* Author: Daire McNamara <daire.mcnamara@microchip.com>
|
||||
*/
|
||||
|
||||
#include <linux/bitfield.h>
|
||||
#include <linux/clk.h>
|
||||
#include <linux/irqchip/chained_irq.h>
|
||||
#include <linux/irqdomain.h>
|
||||
@ -20,8 +21,7 @@
|
||||
#include "../pci.h"
|
||||
|
||||
/* Number of MSI IRQs */
|
||||
#define MC_NUM_MSI_IRQS 32
|
||||
#define MC_NUM_MSI_IRQS_CODED 5
|
||||
#define MC_MAX_NUM_MSI_IRQS 32
|
||||
|
||||
/* PCIe Bridge Phy and Controller Phy offsets */
|
||||
#define MC_PCIE1_BRIDGE_ADDR 0x00008000u
|
||||
@ -30,65 +30,11 @@
|
||||
#define MC_PCIE_BRIDGE_ADDR (MC_PCIE1_BRIDGE_ADDR)
|
||||
#define MC_PCIE_CTRL_ADDR (MC_PCIE1_CTRL_ADDR)
|
||||
|
||||
/* PCIe Controller Phy Regs */
|
||||
#define SEC_ERROR_CNT 0x20
|
||||
#define DED_ERROR_CNT 0x24
|
||||
#define SEC_ERROR_INT 0x28
|
||||
#define SEC_ERROR_INT_TX_RAM_SEC_ERR_INT GENMASK(3, 0)
|
||||
#define SEC_ERROR_INT_RX_RAM_SEC_ERR_INT GENMASK(7, 4)
|
||||
#define SEC_ERROR_INT_PCIE2AXI_RAM_SEC_ERR_INT GENMASK(11, 8)
|
||||
#define SEC_ERROR_INT_AXI2PCIE_RAM_SEC_ERR_INT GENMASK(15, 12)
|
||||
#define NUM_SEC_ERROR_INTS (4)
|
||||
#define SEC_ERROR_INT_MASK 0x2c
|
||||
#define DED_ERROR_INT 0x30
|
||||
#define DED_ERROR_INT_TX_RAM_DED_ERR_INT GENMASK(3, 0)
|
||||
#define DED_ERROR_INT_RX_RAM_DED_ERR_INT GENMASK(7, 4)
|
||||
#define DED_ERROR_INT_PCIE2AXI_RAM_DED_ERR_INT GENMASK(11, 8)
|
||||
#define DED_ERROR_INT_AXI2PCIE_RAM_DED_ERR_INT GENMASK(15, 12)
|
||||
#define NUM_DED_ERROR_INTS (4)
|
||||
#define DED_ERROR_INT_MASK 0x34
|
||||
#define ECC_CONTROL 0x38
|
||||
#define ECC_CONTROL_TX_RAM_INJ_ERROR_0 BIT(0)
|
||||
#define ECC_CONTROL_TX_RAM_INJ_ERROR_1 BIT(1)
|
||||
#define ECC_CONTROL_TX_RAM_INJ_ERROR_2 BIT(2)
|
||||
#define ECC_CONTROL_TX_RAM_INJ_ERROR_3 BIT(3)
|
||||
#define ECC_CONTROL_RX_RAM_INJ_ERROR_0 BIT(4)
|
||||
#define ECC_CONTROL_RX_RAM_INJ_ERROR_1 BIT(5)
|
||||
#define ECC_CONTROL_RX_RAM_INJ_ERROR_2 BIT(6)
|
||||
#define ECC_CONTROL_RX_RAM_INJ_ERROR_3 BIT(7)
|
||||
#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_0 BIT(8)
|
||||
#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_1 BIT(9)
|
||||
#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_2 BIT(10)
|
||||
#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_3 BIT(11)
|
||||
#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_0 BIT(12)
|
||||
#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_1 BIT(13)
|
||||
#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_2 BIT(14)
|
||||
#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_3 BIT(15)
|
||||
#define ECC_CONTROL_TX_RAM_ECC_BYPASS BIT(24)
|
||||
#define ECC_CONTROL_RX_RAM_ECC_BYPASS BIT(25)
|
||||
#define ECC_CONTROL_PCIE2AXI_RAM_ECC_BYPASS BIT(26)
|
||||
#define ECC_CONTROL_AXI2PCIE_RAM_ECC_BYPASS BIT(27)
|
||||
#define LTSSM_STATE 0x5c
|
||||
#define LTSSM_L0_STATE 0x10
|
||||
#define PCIE_EVENT_INT 0x14c
|
||||
#define PCIE_EVENT_INT_L2_EXIT_INT BIT(0)
|
||||
#define PCIE_EVENT_INT_HOTRST_EXIT_INT BIT(1)
|
||||
#define PCIE_EVENT_INT_DLUP_EXIT_INT BIT(2)
|
||||
#define PCIE_EVENT_INT_MASK GENMASK(2, 0)
|
||||
#define PCIE_EVENT_INT_L2_EXIT_INT_MASK BIT(16)
|
||||
#define PCIE_EVENT_INT_HOTRST_EXIT_INT_MASK BIT(17)
|
||||
#define PCIE_EVENT_INT_DLUP_EXIT_INT_MASK BIT(18)
|
||||
#define PCIE_EVENT_INT_ENB_MASK GENMASK(18, 16)
|
||||
#define PCIE_EVENT_INT_ENB_SHIFT 16
|
||||
#define NUM_PCIE_EVENTS (3)
|
||||
|
||||
/* PCIe Bridge Phy Regs */
|
||||
#define PCIE_PCI_IDS_DW1 0x9c
|
||||
|
||||
/* PCIe Config space MSI capability structure */
|
||||
#define MC_MSI_CAP_CTRL_OFFSET 0xe0u
|
||||
#define MC_MSI_MAX_Q_AVAIL (MC_NUM_MSI_IRQS_CODED << 1)
|
||||
#define MC_MSI_Q_SIZE (MC_NUM_MSI_IRQS_CODED << 4)
|
||||
#define PCIE_PCI_IRQ_DW0 0xa8
|
||||
#define MSIX_CAP_MASK BIT(31)
|
||||
#define NUM_MSI_MSGS_MASK GENMASK(6, 4)
|
||||
#define NUM_MSI_MSGS_SHIFT 4
|
||||
|
||||
#define IMASK_LOCAL 0x180
|
||||
#define DMA_END_ENGINE_0_MASK 0x00000000u
|
||||
@ -137,7 +83,7 @@
|
||||
#define ISTATUS_LOCAL 0x184
|
||||
#define IMASK_HOST 0x188
|
||||
#define ISTATUS_HOST 0x18c
|
||||
#define MSI_ADDR 0x190
|
||||
#define IMSI_ADDR 0x190
|
||||
#define ISTATUS_MSI 0x194
|
||||
|
||||
/* PCIe Master table init defines */
|
||||
@ -162,17 +108,73 @@
|
||||
|
||||
#define ATR_ENTRY_SIZE 32
|
||||
|
||||
/* PCIe Controller Phy Regs */
|
||||
#define SEC_ERROR_EVENT_CNT 0x20
|
||||
#define DED_ERROR_EVENT_CNT 0x24
|
||||
#define SEC_ERROR_INT 0x28
|
||||
#define SEC_ERROR_INT_TX_RAM_SEC_ERR_INT GENMASK(3, 0)
|
||||
#define SEC_ERROR_INT_RX_RAM_SEC_ERR_INT GENMASK(7, 4)
|
||||
#define SEC_ERROR_INT_PCIE2AXI_RAM_SEC_ERR_INT GENMASK(11, 8)
|
||||
#define SEC_ERROR_INT_AXI2PCIE_RAM_SEC_ERR_INT GENMASK(15, 12)
|
||||
#define SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT GENMASK(15, 0)
|
||||
#define NUM_SEC_ERROR_INTS (4)
|
||||
#define SEC_ERROR_INT_MASK 0x2c
|
||||
#define DED_ERROR_INT 0x30
|
||||
#define DED_ERROR_INT_TX_RAM_DED_ERR_INT GENMASK(3, 0)
|
||||
#define DED_ERROR_INT_RX_RAM_DED_ERR_INT GENMASK(7, 4)
|
||||
#define DED_ERROR_INT_PCIE2AXI_RAM_DED_ERR_INT GENMASK(11, 8)
|
||||
#define DED_ERROR_INT_AXI2PCIE_RAM_DED_ERR_INT GENMASK(15, 12)
|
||||
#define DED_ERROR_INT_ALL_RAM_DED_ERR_INT GENMASK(15, 0)
|
||||
#define NUM_DED_ERROR_INTS (4)
|
||||
#define DED_ERROR_INT_MASK 0x34
|
||||
#define ECC_CONTROL 0x38
|
||||
#define ECC_CONTROL_TX_RAM_INJ_ERROR_0 BIT(0)
|
||||
#define ECC_CONTROL_TX_RAM_INJ_ERROR_1 BIT(1)
|
||||
#define ECC_CONTROL_TX_RAM_INJ_ERROR_2 BIT(2)
|
||||
#define ECC_CONTROL_TX_RAM_INJ_ERROR_3 BIT(3)
|
||||
#define ECC_CONTROL_RX_RAM_INJ_ERROR_0 BIT(4)
|
||||
#define ECC_CONTROL_RX_RAM_INJ_ERROR_1 BIT(5)
|
||||
#define ECC_CONTROL_RX_RAM_INJ_ERROR_2 BIT(6)
|
||||
#define ECC_CONTROL_RX_RAM_INJ_ERROR_3 BIT(7)
|
||||
#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_0 BIT(8)
|
||||
#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_1 BIT(9)
|
||||
#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_2 BIT(10)
|
||||
#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_3 BIT(11)
|
||||
#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_0 BIT(12)
|
||||
#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_1 BIT(13)
|
||||
#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_2 BIT(14)
|
||||
#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_3 BIT(15)
|
||||
#define ECC_CONTROL_TX_RAM_ECC_BYPASS BIT(24)
|
||||
#define ECC_CONTROL_RX_RAM_ECC_BYPASS BIT(25)
|
||||
#define ECC_CONTROL_PCIE2AXI_RAM_ECC_BYPASS BIT(26)
|
||||
#define ECC_CONTROL_AXI2PCIE_RAM_ECC_BYPASS BIT(27)
|
||||
#define PCIE_EVENT_INT 0x14c
|
||||
#define PCIE_EVENT_INT_L2_EXIT_INT BIT(0)
|
||||
#define PCIE_EVENT_INT_HOTRST_EXIT_INT BIT(1)
|
||||
#define PCIE_EVENT_INT_DLUP_EXIT_INT BIT(2)
|
||||
#define PCIE_EVENT_INT_MASK GENMASK(2, 0)
|
||||
#define PCIE_EVENT_INT_L2_EXIT_INT_MASK BIT(16)
|
||||
#define PCIE_EVENT_INT_HOTRST_EXIT_INT_MASK BIT(17)
|
||||
#define PCIE_EVENT_INT_DLUP_EXIT_INT_MASK BIT(18)
|
||||
#define PCIE_EVENT_INT_ENB_MASK GENMASK(18, 16)
|
||||
#define PCIE_EVENT_INT_ENB_SHIFT 16
|
||||
#define NUM_PCIE_EVENTS (3)
|
||||
|
||||
/* PCIe Config space MSI capability structure */
|
||||
#define MC_MSI_CAP_CTRL_OFFSET 0xe0u
|
||||
|
||||
/* Events */
|
||||
#define EVENT_PCIE_L2_EXIT 0
|
||||
#define EVENT_PCIE_HOTRST_EXIT 1
|
||||
#define EVENT_PCIE_DLUP_EXIT 2
|
||||
#define EVENT_SEC_TX_RAM_SEC_ERR 3
|
||||
#define EVENT_SEC_RX_RAM_SEC_ERR 4
|
||||
#define EVENT_SEC_AXI2PCIE_RAM_SEC_ERR 5
|
||||
#define EVENT_SEC_PCIE2AXI_RAM_SEC_ERR 6
|
||||
#define EVENT_SEC_PCIE2AXI_RAM_SEC_ERR 5
|
||||
#define EVENT_SEC_AXI2PCIE_RAM_SEC_ERR 6
|
||||
#define EVENT_DED_TX_RAM_DED_ERR 7
|
||||
#define EVENT_DED_RX_RAM_DED_ERR 8
|
||||
#define EVENT_DED_AXI2PCIE_RAM_DED_ERR 9
|
||||
#define EVENT_DED_PCIE2AXI_RAM_DED_ERR 10
|
||||
#define EVENT_DED_PCIE2AXI_RAM_DED_ERR 9
|
||||
#define EVENT_DED_AXI2PCIE_RAM_DED_ERR 10
|
||||
#define EVENT_LOCAL_DMA_END_ENGINE_0 11
|
||||
#define EVENT_LOCAL_DMA_END_ENGINE_1 12
|
||||
#define EVENT_LOCAL_DMA_ERROR_ENGINE_0 13
|
||||
@ -259,7 +261,7 @@ struct mc_msi {
|
||||
struct irq_domain *dev_domain;
|
||||
u32 num_vectors;
|
||||
u64 vector_phy;
|
||||
DECLARE_BITMAP(used, MC_NUM_MSI_IRQS);
|
||||
DECLARE_BITMAP(used, MC_MAX_NUM_MSI_IRQS);
|
||||
};
|
||||
|
||||
struct mc_pcie {
|
||||
@ -382,25 +384,29 @@ static struct {
|
||||
|
||||
static char poss_clks[][5] = { "fic0", "fic1", "fic2", "fic3" };
|
||||
|
||||
static void mc_pcie_enable_msi(struct mc_pcie *port, void __iomem *base)
|
||||
static struct mc_pcie *port;
|
||||
|
||||
static void mc_pcie_enable_msi(struct mc_pcie *port, void __iomem *ecam)
|
||||
{
|
||||
struct mc_msi *msi = &port->msi;
|
||||
u32 cap_offset = MC_MSI_CAP_CTRL_OFFSET;
|
||||
u16 msg_ctrl = readw_relaxed(base + cap_offset + PCI_MSI_FLAGS);
|
||||
u16 reg;
|
||||
u8 queue_size;
|
||||
|
||||
msg_ctrl |= PCI_MSI_FLAGS_ENABLE;
|
||||
msg_ctrl &= ~PCI_MSI_FLAGS_QMASK;
|
||||
msg_ctrl |= MC_MSI_MAX_Q_AVAIL;
|
||||
msg_ctrl &= ~PCI_MSI_FLAGS_QSIZE;
|
||||
msg_ctrl |= MC_MSI_Q_SIZE;
|
||||
msg_ctrl |= PCI_MSI_FLAGS_64BIT;
|
||||
/* Fixup MSI enable flag */
|
||||
reg = readw_relaxed(ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_FLAGS);
|
||||
reg |= PCI_MSI_FLAGS_ENABLE;
|
||||
writew_relaxed(reg, ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_FLAGS);
|
||||
|
||||
writew_relaxed(msg_ctrl, base + cap_offset + PCI_MSI_FLAGS);
|
||||
/* Fixup PCI MSI queue flags */
|
||||
queue_size = FIELD_GET(PCI_MSI_FLAGS_QMASK, reg);
|
||||
reg |= FIELD_PREP(PCI_MSI_FLAGS_QSIZE, queue_size);
|
||||
writew_relaxed(reg, ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_FLAGS);
|
||||
|
||||
/* Fixup MSI addr fields */
|
||||
writel_relaxed(lower_32_bits(msi->vector_phy),
|
||||
base + cap_offset + PCI_MSI_ADDRESS_LO);
|
||||
ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_ADDRESS_LO);
|
||||
writel_relaxed(upper_32_bits(msi->vector_phy),
|
||||
base + cap_offset + PCI_MSI_ADDRESS_HI);
|
||||
ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_ADDRESS_HI);
|
||||
}
|
||||
|
||||
static void mc_handle_msi(struct irq_desc *desc)
|
||||
@ -473,10 +479,7 @@ static int mc_irq_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
|
||||
{
|
||||
struct mc_pcie *port = domain->host_data;
|
||||
struct mc_msi *msi = &port->msi;
|
||||
void __iomem *bridge_base_addr =
|
||||
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
|
||||
unsigned long bit;
|
||||
u32 val;
|
||||
|
||||
mutex_lock(&msi->lock);
|
||||
bit = find_first_zero_bit(msi->used, msi->num_vectors);
|
||||
@ -490,11 +493,6 @@ static int mc_irq_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
|
||||
irq_domain_set_info(domain, virq, bit, &mc_msi_bottom_irq_chip,
|
||||
domain->host_data, handle_edge_irq, NULL, NULL);
|
||||
|
||||
/* Enable MSI interrupts */
|
||||
val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
|
||||
val |= PM_MSI_INT_MSI_MASK;
|
||||
writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
|
||||
|
||||
mutex_unlock(&msi->lock);
|
||||
|
||||
return 0;
|
||||
@ -656,9 +654,10 @@ static inline u32 reg_to_event(u32 reg, struct event_map field)
|
||||
return (reg & field.reg_mask) ? BIT(field.event_bit) : 0;
|
||||
}
|
||||
|
||||
static u32 pcie_events(void __iomem *addr)
|
||||
static u32 pcie_events(struct mc_pcie *port)
|
||||
{
|
||||
u32 reg = readl_relaxed(addr);
|
||||
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
|
||||
u32 reg = readl_relaxed(ctrl_base_addr + PCIE_EVENT_INT);
|
||||
u32 val = 0;
|
||||
int i;
|
||||
|
||||
@ -668,9 +667,10 @@ static u32 pcie_events(void __iomem *addr)
|
||||
return val;
|
||||
}
|
||||
|
||||
static u32 sec_errors(void __iomem *addr)
|
||||
static u32 sec_errors(struct mc_pcie *port)
|
||||
{
|
||||
u32 reg = readl_relaxed(addr);
|
||||
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
|
||||
u32 reg = readl_relaxed(ctrl_base_addr + SEC_ERROR_INT);
|
||||
u32 val = 0;
|
||||
int i;
|
||||
|
||||
@ -680,9 +680,10 @@ static u32 sec_errors(void __iomem *addr)
|
||||
return val;
|
||||
}
|
||||
|
||||
static u32 ded_errors(void __iomem *addr)
|
||||
static u32 ded_errors(struct mc_pcie *port)
|
||||
{
|
||||
u32 reg = readl_relaxed(addr);
|
||||
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
|
||||
u32 reg = readl_relaxed(ctrl_base_addr + DED_ERROR_INT);
|
||||
u32 val = 0;
|
||||
int i;
|
||||
|
||||
@ -692,9 +693,10 @@ static u32 ded_errors(void __iomem *addr)
|
||||
return val;
|
||||
}
|
||||
|
||||
static u32 local_events(void __iomem *addr)
|
||||
static u32 local_events(struct mc_pcie *port)
|
||||
{
|
||||
u32 reg = readl_relaxed(addr);
|
||||
void __iomem *bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
|
||||
u32 reg = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
|
||||
u32 val = 0;
|
||||
int i;
|
||||
|
||||
@ -706,15 +708,12 @@ static u32 local_events(void __iomem *addr)
|
||||
|
||||
static u32 get_events(struct mc_pcie *port)
|
||||
{
|
||||
void __iomem *bridge_base_addr =
|
||||
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
|
||||
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
|
||||
u32 events = 0;
|
||||
|
||||
events |= pcie_events(ctrl_base_addr + PCIE_EVENT_INT);
|
||||
events |= sec_errors(ctrl_base_addr + SEC_ERROR_INT);
|
||||
events |= ded_errors(ctrl_base_addr + DED_ERROR_INT);
|
||||
events |= local_events(bridge_base_addr + ISTATUS_LOCAL);
|
||||
events |= pcie_events(port);
|
||||
events |= sec_errors(port);
|
||||
events |= ded_errors(port);
|
||||
events |= local_events(port);
|
||||
|
||||
return events;
|
||||
}
|
||||
@ -848,6 +847,13 @@ static const struct irq_domain_ops event_domain_ops = {
|
||||
.map = mc_pcie_event_map,
|
||||
};
|
||||
|
||||
static inline void mc_pcie_deinit_clk(void *data)
|
||||
{
|
||||
struct clk *clk = data;
|
||||
|
||||
clk_disable_unprepare(clk);
|
||||
}
|
||||
|
||||
static inline struct clk *mc_pcie_init_clk(struct device *dev, const char *id)
|
||||
{
|
||||
struct clk *clk;
|
||||
@ -863,8 +869,7 @@ static inline struct clk *mc_pcie_init_clk(struct device *dev, const char *id)
|
||||
if (ret)
|
||||
return ERR_PTR(ret);
|
||||
|
||||
devm_add_action_or_reset(dev, (void (*) (void *))clk_disable_unprepare,
|
||||
clk);
|
||||
devm_add_action_or_reset(dev, mc_pcie_deinit_clk, clk);
|
||||
|
||||
return clk;
|
||||
}
|
||||
@ -987,39 +992,73 @@ static int mc_pcie_setup_windows(struct platform_device *pdev,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mc_platform_init(struct pci_config_window *cfg)
|
||||
static inline void mc_clear_secs(struct mc_pcie *port)
|
||||
{
|
||||
struct device *dev = cfg->parent;
|
||||
struct platform_device *pdev = to_platform_device(dev);
|
||||
struct mc_pcie *port;
|
||||
void __iomem *bridge_base_addr;
|
||||
void __iomem *ctrl_base_addr;
|
||||
int ret;
|
||||
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
|
||||
|
||||
writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT, ctrl_base_addr +
|
||||
SEC_ERROR_INT);
|
||||
writel_relaxed(0, ctrl_base_addr + SEC_ERROR_EVENT_CNT);
|
||||
}
|
||||
|
||||
static inline void mc_clear_deds(struct mc_pcie *port)
|
||||
{
|
||||
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
|
||||
|
||||
writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT, ctrl_base_addr +
|
||||
DED_ERROR_INT);
|
||||
writel_relaxed(0, ctrl_base_addr + DED_ERROR_EVENT_CNT);
|
||||
}
|
||||
|
||||
static void mc_disable_interrupts(struct mc_pcie *port)
|
||||
{
|
||||
void __iomem *bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
|
||||
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
|
||||
u32 val;
|
||||
|
||||
/* Ensure ECC bypass is enabled */
|
||||
val = ECC_CONTROL_TX_RAM_ECC_BYPASS |
|
||||
ECC_CONTROL_RX_RAM_ECC_BYPASS |
|
||||
ECC_CONTROL_PCIE2AXI_RAM_ECC_BYPASS |
|
||||
ECC_CONTROL_AXI2PCIE_RAM_ECC_BYPASS;
|
||||
writel_relaxed(val, ctrl_base_addr + ECC_CONTROL);
|
||||
|
||||
/* Disable SEC errors and clear any outstanding */
|
||||
writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT, ctrl_base_addr +
|
||||
SEC_ERROR_INT_MASK);
|
||||
mc_clear_secs(port);
|
||||
|
||||
/* Disable DED errors and clear any outstanding */
|
||||
writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT, ctrl_base_addr +
|
||||
DED_ERROR_INT_MASK);
|
||||
mc_clear_deds(port);
|
||||
|
||||
/* Disable local interrupts and clear any outstanding */
|
||||
writel_relaxed(0, bridge_base_addr + IMASK_LOCAL);
|
||||
writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_LOCAL);
|
||||
writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_MSI);
|
||||
|
||||
/* Disable PCIe events and clear any outstanding */
|
||||
val = PCIE_EVENT_INT_L2_EXIT_INT |
|
||||
PCIE_EVENT_INT_HOTRST_EXIT_INT |
|
||||
PCIE_EVENT_INT_DLUP_EXIT_INT |
|
||||
PCIE_EVENT_INT_L2_EXIT_INT_MASK |
|
||||
PCIE_EVENT_INT_HOTRST_EXIT_INT_MASK |
|
||||
PCIE_EVENT_INT_DLUP_EXIT_INT_MASK;
|
||||
writel_relaxed(val, ctrl_base_addr + PCIE_EVENT_INT);
|
||||
|
||||
/* Disable host interrupts and clear any outstanding */
|
||||
writel_relaxed(0, bridge_base_addr + IMASK_HOST);
|
||||
writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_HOST);
|
||||
}
|
||||
|
||||
static int mc_init_interrupts(struct platform_device *pdev, struct mc_pcie *port)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
int irq;
|
||||
int i, intx_irq, msi_irq, event_irq;
|
||||
u32 val;
|
||||
int err;
|
||||
int ret;
|
||||
|
||||
port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
|
||||
if (!port)
|
||||
return -ENOMEM;
|
||||
port->dev = dev;
|
||||
|
||||
ret = mc_pcie_init_clks(dev);
|
||||
if (ret) {
|
||||
dev_err(dev, "failed to get clock resources, error %d\n", ret);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
port->axi_base_addr = devm_platform_ioremap_resource(pdev, 1);
|
||||
if (IS_ERR(port->axi_base_addr))
|
||||
return PTR_ERR(port->axi_base_addr);
|
||||
|
||||
bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
|
||||
ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
|
||||
|
||||
port->msi.vector_phy = MSI_ADDR;
|
||||
port->msi.num_vectors = MC_NUM_MSI_IRQS;
|
||||
ret = mc_pcie_init_irq_domains(port);
|
||||
if (ret) {
|
||||
dev_err(dev, "failed creating IRQ domains\n");
|
||||
@ -1037,11 +1076,11 @@ static int mc_platform_init(struct pci_config_window *cfg)
|
||||
return -ENXIO;
|
||||
}
|
||||
|
||||
err = devm_request_irq(dev, event_irq, mc_event_handler,
|
||||
ret = devm_request_irq(dev, event_irq, mc_event_handler,
|
||||
0, event_cause[i].sym, port);
|
||||
if (err) {
|
||||
if (ret) {
|
||||
dev_err(dev, "failed to request IRQ %d\n", event_irq);
|
||||
return err;
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
@ -1066,44 +1105,81 @@ static int mc_platform_init(struct pci_config_window *cfg)
|
||||
/* Plug the main event chained handler */
|
||||
irq_set_chained_handler_and_data(irq, mc_handle_event, port);
|
||||
|
||||
/* Hardware doesn't setup MSI by default */
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mc_platform_init(struct pci_config_window *cfg)
|
||||
{
|
||||
struct device *dev = cfg->parent;
|
||||
struct platform_device *pdev = to_platform_device(dev);
|
||||
void __iomem *bridge_base_addr =
|
||||
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
|
||||
int ret;
|
||||
|
||||
/* Configure address translation table 0 for PCIe config space */
|
||||
mc_pcie_setup_window(bridge_base_addr, 0, cfg->res.start,
|
||||
cfg->res.start,
|
||||
resource_size(&cfg->res));
|
||||
|
||||
/* Need some fixups in config space */
|
||||
mc_pcie_enable_msi(port, cfg->win);
|
||||
|
||||
val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
|
||||
val |= PM_MSI_INT_INTX_MASK;
|
||||
writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
|
||||
/* Configure non-config space outbound ranges */
|
||||
ret = mc_pcie_setup_windows(pdev, port);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
writel_relaxed(val, ctrl_base_addr + ECC_CONTROL);
|
||||
/* Address translation is up; safe to enable interrupts */
|
||||
ret = mc_init_interrupts(pdev, port);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
val = PCIE_EVENT_INT_L2_EXIT_INT |
|
||||
PCIE_EVENT_INT_HOTRST_EXIT_INT |
|
||||
PCIE_EVENT_INT_DLUP_EXIT_INT;
|
||||
writel_relaxed(val, ctrl_base_addr + PCIE_EVENT_INT);
|
||||
return 0;
|
||||
}
|
||||
|
||||
val = SEC_ERROR_INT_TX_RAM_SEC_ERR_INT |
|
||||
SEC_ERROR_INT_RX_RAM_SEC_ERR_INT |
|
||||
SEC_ERROR_INT_PCIE2AXI_RAM_SEC_ERR_INT |
|
||||
SEC_ERROR_INT_AXI2PCIE_RAM_SEC_ERR_INT;
|
||||
writel_relaxed(val, ctrl_base_addr + SEC_ERROR_INT);
|
||||
writel_relaxed(0, ctrl_base_addr + SEC_ERROR_INT_MASK);
|
||||
writel_relaxed(0, ctrl_base_addr + SEC_ERROR_CNT);
|
||||
static int mc_host_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
void __iomem *bridge_base_addr;
|
||||
int ret;
|
||||
u32 val;
|
||||
|
||||
val = DED_ERROR_INT_TX_RAM_DED_ERR_INT |
|
||||
DED_ERROR_INT_RX_RAM_DED_ERR_INT |
|
||||
DED_ERROR_INT_PCIE2AXI_RAM_DED_ERR_INT |
|
||||
DED_ERROR_INT_AXI2PCIE_RAM_DED_ERR_INT;
|
||||
writel_relaxed(val, ctrl_base_addr + DED_ERROR_INT);
|
||||
writel_relaxed(0, ctrl_base_addr + DED_ERROR_INT_MASK);
|
||||
writel_relaxed(0, ctrl_base_addr + DED_ERROR_CNT);
|
||||
port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
|
||||
if (!port)
|
||||
return -ENOMEM;
|
||||
|
||||
writel_relaxed(0, bridge_base_addr + IMASK_HOST);
|
||||
writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_HOST);
|
||||
port->dev = dev;
|
||||
|
||||
/* Configure Address Translation Table 0 for PCIe config space */
|
||||
mc_pcie_setup_window(bridge_base_addr, 0, cfg->res.start & 0xffffffff,
|
||||
cfg->res.start, resource_size(&cfg->res));
|
||||
port->axi_base_addr = devm_platform_ioremap_resource(pdev, 1);
|
||||
if (IS_ERR(port->axi_base_addr))
|
||||
return PTR_ERR(port->axi_base_addr);
|
||||
|
||||
return mc_pcie_setup_windows(pdev, port);
|
||||
mc_disable_interrupts(port);
|
||||
|
||||
bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
|
||||
|
||||
/* Allow enabling MSI by disabling MSI-X */
|
||||
val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0);
|
||||
val &= ~MSIX_CAP_MASK;
|
||||
writel(val, bridge_base_addr + PCIE_PCI_IRQ_DW0);
|
||||
|
||||
/* Pick num vectors from bitfile programmed onto FPGA fabric */
|
||||
val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0);
|
||||
val &= NUM_MSI_MSGS_MASK;
|
||||
val >>= NUM_MSI_MSGS_SHIFT;
|
||||
|
||||
port->msi.num_vectors = 1 << val;
|
||||
|
||||
/* Pick vector address from design */
|
||||
port->msi.vector_phy = readl_relaxed(bridge_base_addr + IMSI_ADDR);
|
||||
|
||||
ret = mc_pcie_init_clks(dev);
|
||||
if (ret) {
|
||||
dev_err(dev, "failed to get clock resources, error %d\n", ret);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
return pci_host_common_probe(pdev);
|
||||
}
|
||||
|
||||
static const struct pci_ecam_ops mc_ecam_ops = {
|
||||
@@ -1126,7 +1202,7 @@ static const struct of_device_id mc_pcie_of_match[] = {
MODULE_DEVICE_TABLE(of, mc_pcie_of_match);

static struct platform_driver mc_pcie_driver = {
	.probe = pci_host_common_probe,
	.probe = mc_host_probe,
	.driver = {
		.name = "microchip-pcie",
		.of_match_table = mc_pcie_of_match,
@@ -1135,5 +1211,6 @@ static struct platform_driver mc_pcie_driver = {
};

builtin_platform_driver(mc_pcie_driver);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Microchip PCIe host controller driver");
MODULE_AUTHOR("Daire McNamara <daire.mcnamara@microchip.com>");
|
@ -24,10 +24,8 @@
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/pci_ids.h>
|
||||
#include <linux/phy/phy.h>
|
||||
|
@ -15,6 +15,7 @@
|
||||
#include <linux/delay.h>
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/iopoll.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/phy/phy.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
@ -158,7 +158,9 @@
|
||||
#define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274)
|
||||
#define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20)
|
||||
|
||||
#define PCIE_ADDR_MASK 0xffffff00
|
||||
#define MAX_AXI_IB_ROOTPORT_REGION_NUM 3
|
||||
#define MIN_AXI_ADDR_BITS_PASSED 8
|
||||
#define PCIE_ADDR_MASK GENMASK_ULL(63, MIN_AXI_ADDR_BITS_PASSED)
|
||||
#define PCIE_CORE_AXI_CONF_BASE 0xc00000
|
||||
#define PCIE_CORE_OB_REGION_ADDR0 (PCIE_CORE_AXI_CONF_BASE + 0x0)
|
||||
#define PCIE_CORE_OB_REGION_ADDR0_NUM_BITS 0x3f
|
||||
@ -185,8 +187,6 @@
|
||||
#define AXI_WRAPPER_TYPE1_CFG 0xb
|
||||
#define AXI_WRAPPER_NOR_MSG 0xc
|
||||
|
||||
#define MAX_AXI_IB_ROOTPORT_REGION_NUM 3
|
||||
#define MIN_AXI_ADDR_BITS_PASSED 8
|
||||
#define PCIE_RC_SEND_PME_OFF 0x11960
|
||||
#define ROCKCHIP_VENDOR_ID 0x1d87
|
||||
#define PCIE_LINK_IS_L2(x) \
|
||||
|
@@ -541,8 +541,23 @@ static void vmd_domain_reset(struct vmd_dev *vmd)
					    PCI_CLASS_BRIDGE_PCI))
				continue;

			memset_io(base + PCI_IO_BASE, 0,
				  PCI_ROM_ADDRESS1 - PCI_IO_BASE);
			/*
			 * Temporarily disable the I/O range before updating
			 * PCI_IO_BASE.
			 */
			writel(0x0000ffff, base + PCI_IO_BASE_UPPER16);
			/* Update lower 16 bits of I/O base/limit */
			writew(0x00f0, base + PCI_IO_BASE);
			/* Update upper 16 bits of I/O base/limit */
			writel(0, base + PCI_IO_BASE_UPPER16);

			/* MMIO Base/Limit */
			writel(0x0000fff0, base + PCI_MEMORY_BASE);

			/* Prefetchable MMIO Base/Limit */
			writel(0, base + PCI_PREF_LIMIT_UPPER32);
			writel(0x0000fff0, base + PCI_PREF_MEMORY_BASE);
			writel(0xffffffff, base + PCI_PREF_BASE_UPPER32);
		}
	}
}

@@ -293,8 +293,8 @@ static int pci_doe_recv_resp(struct pci_doe_mb *doe_mb, struct pci_doe_task *tas
static void signal_task_complete(struct pci_doe_task *task, int rv)
{
	task->rv = rv;
	task->complete(task);
	destroy_work_on_stack(&task->work);
	task->complete(task);
}

static void signal_task_abort(struct pci_doe_task *task, int rv)

@ -6,8 +6,10 @@
|
||||
* Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
|
||||
*/
|
||||
|
||||
#include <linux/dmaengine.h>
|
||||
#include <linux/mhi_ep.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_dma.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/pci-epc.h>
|
||||
#include <linux/pci-epf.h>
|
||||
@ -16,6 +18,9 @@
|
||||
|
||||
#define to_epf_mhi(cntrl) container_of(cntrl, struct pci_epf_mhi, cntrl)
|
||||
|
||||
/* Platform specific flags */
|
||||
#define MHI_EPF_USE_DMA BIT(0)
|
||||
|
||||
struct pci_epf_mhi_ep_info {
|
||||
const struct mhi_ep_cntrl_config *config;
|
||||
struct pci_epf_header *epf_header;
|
||||
@ -23,6 +28,7 @@ struct pci_epf_mhi_ep_info {
|
||||
u32 epf_flags;
|
||||
u32 msi_count;
|
||||
u32 mru;
|
||||
u32 flags;
|
||||
};
|
||||
|
||||
#define MHI_EP_CHANNEL_CONFIG(ch_num, ch_name, direction) \
|
||||
@ -91,17 +97,42 @@ static const struct pci_epf_mhi_ep_info sdx55_info = {
|
||||
.mru = 0x8000,
|
||||
};
|
||||
|
||||
static struct pci_epf_header sm8450_header = {
|
||||
.vendorid = PCI_VENDOR_ID_QCOM,
|
||||
.deviceid = 0x0306,
|
||||
.baseclass_code = PCI_CLASS_OTHERS,
|
||||
.interrupt_pin = PCI_INTERRUPT_INTA,
|
||||
};
|
||||
|
||||
static const struct pci_epf_mhi_ep_info sm8450_info = {
|
||||
.config = &mhi_v1_config,
|
||||
.epf_header = &sm8450_header,
|
||||
.bar_num = BAR_0,
|
||||
.epf_flags = PCI_BASE_ADDRESS_MEM_TYPE_32,
|
||||
.msi_count = 32,
|
||||
.mru = 0x8000,
|
||||
.flags = MHI_EPF_USE_DMA,
|
||||
};
|
||||
|
||||
struct pci_epf_mhi {
|
||||
const struct pci_epc_features *epc_features;
|
||||
const struct pci_epf_mhi_ep_info *info;
|
||||
struct mhi_ep_cntrl mhi_cntrl;
|
||||
struct pci_epf *epf;
|
||||
struct mutex lock;
|
||||
void __iomem *mmio;
|
||||
resource_size_t mmio_phys;
|
||||
struct dma_chan *dma_chan_tx;
|
||||
struct dma_chan *dma_chan_rx;
|
||||
u32 mmio_size;
|
||||
int irq;
|
||||
};
|
||||
|
||||
static size_t get_align_offset(struct pci_epf_mhi *epf_mhi, u64 addr)
|
||||
{
|
||||
return addr & (epf_mhi->epc_features->align -1);
|
||||
}
|
||||
|
||||
static int __pci_epf_mhi_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr,
|
||||
phys_addr_t *paddr, void __iomem **vaddr,
|
||||
size_t offset, size_t size)
|
||||
@ -133,8 +164,7 @@ static int pci_epf_mhi_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr,
|
||||
size_t size)
|
||||
{
|
||||
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
|
||||
struct pci_epc *epc = epf_mhi->epf->epc;
|
||||
size_t offset = pci_addr & (epc->mem->window.page_size - 1);
|
||||
size_t offset = get_align_offset(epf_mhi, pci_addr);
|
||||
|
||||
return __pci_epf_mhi_alloc_map(mhi_cntrl, pci_addr, paddr, vaddr,
|
||||
offset, size);
|
||||
@ -159,9 +189,7 @@ static void pci_epf_mhi_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr,
|
||||
size_t size)
|
||||
{
|
||||
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
|
||||
struct pci_epf *epf = epf_mhi->epf;
|
||||
struct pci_epc *epc = epf->epc;
|
||||
size_t offset = pci_addr & (epc->mem->window.page_size - 1);
|
||||
size_t offset = get_align_offset(epf_mhi, pci_addr);
|
||||
|
||||
__pci_epf_mhi_unmap_free(mhi_cntrl, pci_addr, paddr, vaddr, offset,
|
||||
size);
|
||||
@ -181,11 +209,11 @@ static void pci_epf_mhi_raise_irq(struct mhi_ep_cntrl *mhi_cntrl, u32 vector)
|
||||
vector + 1);
|
||||
}
|
||||
|
||||
static int pci_epf_mhi_read_from_host(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
|
||||
void *to, size_t size)
|
||||
static int pci_epf_mhi_iatu_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
|
||||
void *to, size_t size)
|
||||
{
|
||||
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
|
||||
size_t offset = from % SZ_4K;
|
||||
size_t offset = get_align_offset(epf_mhi, from);
|
||||
void __iomem *tre_buf;
|
||||
phys_addr_t tre_phys;
|
||||
int ret;
|
||||
@ -209,11 +237,11 @@ static int pci_epf_mhi_read_from_host(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int pci_epf_mhi_write_to_host(struct mhi_ep_cntrl *mhi_cntrl,
|
||||
void *from, u64 to, size_t size)
|
||||
static int pci_epf_mhi_iatu_write(struct mhi_ep_cntrl *mhi_cntrl,
|
||||
void *from, u64 to, size_t size)
|
||||
{
|
||||
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
|
||||
size_t offset = to % SZ_4K;
|
||||
size_t offset = get_align_offset(epf_mhi, to);
|
||||
void __iomem *tre_buf;
|
||||
phys_addr_t tre_phys;
|
||||
int ret;
|
||||
@ -237,6 +265,206 @@ static int pci_epf_mhi_write_to_host(struct mhi_ep_cntrl *mhi_cntrl,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void pci_epf_mhi_dma_callback(void *param)
|
||||
{
|
||||
complete(param);
|
||||
}
|
||||
|
||||
static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
|
||||
void *to, size_t size)
|
||||
{
|
||||
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
|
||||
struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
|
||||
struct dma_chan *chan = epf_mhi->dma_chan_rx;
|
||||
struct device *dev = &epf_mhi->epf->dev;
|
||||
DECLARE_COMPLETION_ONSTACK(complete);
|
||||
struct dma_async_tx_descriptor *desc;
|
||||
struct dma_slave_config config = {};
|
||||
dma_cookie_t cookie;
|
||||
dma_addr_t dst_addr;
|
||||
int ret;
|
||||
|
||||
if (size < SZ_4K)
|
||||
return pci_epf_mhi_iatu_read(mhi_cntrl, from, to, size);
|
||||
|
||||
mutex_lock(&epf_mhi->lock);
|
||||
|
||||
config.direction = DMA_DEV_TO_MEM;
|
||||
config.src_addr = from;
|
||||
|
||||
ret = dmaengine_slave_config(chan, &config);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to configure DMA channel\n");
|
||||
goto err_unlock;
|
||||
}
|
||||
|
||||
dst_addr = dma_map_single(dma_dev, to, size, DMA_FROM_DEVICE);
|
||||
ret = dma_mapping_error(dma_dev, dst_addr);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to map remote memory\n");
|
||||
goto err_unlock;
|
||||
}
|
||||
|
||||
desc = dmaengine_prep_slave_single(chan, dst_addr, size, DMA_DEV_TO_MEM,
|
||||
DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
|
||||
if (!desc) {
|
||||
dev_err(dev, "Failed to prepare DMA\n");
|
||||
ret = -EIO;
|
||||
goto err_unmap;
|
||||
}
|
||||
|
||||
desc->callback = pci_epf_mhi_dma_callback;
|
||||
desc->callback_param = &complete;
|
||||
|
||||
cookie = dmaengine_submit(desc);
|
||||
ret = dma_submit_error(cookie);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to do DMA submit\n");
|
||||
goto err_unmap;
|
||||
}
|
||||
|
||||
dma_async_issue_pending(chan);
|
||||
ret = wait_for_completion_timeout(&complete, msecs_to_jiffies(1000));
|
||||
if (!ret) {
|
||||
dev_err(dev, "DMA transfer timeout\n");
|
||||
dmaengine_terminate_sync(chan);
|
||||
ret = -ETIMEDOUT;
|
||||
}
|
||||
|
||||
err_unmap:
|
||||
dma_unmap_single(dma_dev, dst_addr, size, DMA_FROM_DEVICE);
|
||||
err_unlock:
|
||||
mutex_unlock(&epf_mhi->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl, void *from,
|
||||
u64 to, size_t size)
|
||||
{
|
||||
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
|
||||
struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
|
||||
struct dma_chan *chan = epf_mhi->dma_chan_tx;
|
||||
struct device *dev = &epf_mhi->epf->dev;
|
||||
DECLARE_COMPLETION_ONSTACK(complete);
|
||||
struct dma_async_tx_descriptor *desc;
|
||||
struct dma_slave_config config = {};
|
||||
dma_cookie_t cookie;
|
||||
dma_addr_t src_addr;
|
||||
int ret;
|
||||
|
||||
if (size < SZ_4K)
|
||||
return pci_epf_mhi_iatu_write(mhi_cntrl, from, to, size);
|
||||
|
||||
mutex_lock(&epf_mhi->lock);
|
||||
|
||||
config.direction = DMA_MEM_TO_DEV;
|
||||
config.dst_addr = to;
|
||||
|
||||
ret = dmaengine_slave_config(chan, &config);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to configure DMA channel\n");
|
||||
goto err_unlock;
|
||||
}
|
||||
|
||||
src_addr = dma_map_single(dma_dev, from, size, DMA_TO_DEVICE);
|
||||
ret = dma_mapping_error(dma_dev, src_addr);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to map remote memory\n");
|
||||
goto err_unlock;
|
||||
}
|
||||
|
||||
desc = dmaengine_prep_slave_single(chan, src_addr, size, DMA_MEM_TO_DEV,
|
||||
DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
|
||||
if (!desc) {
|
||||
dev_err(dev, "Failed to prepare DMA\n");
|
||||
ret = -EIO;
|
||||
goto err_unmap;
|
||||
}
|
||||
|
||||
desc->callback = pci_epf_mhi_dma_callback;
|
||||
desc->callback_param = &complete;
|
||||
|
||||
cookie = dmaengine_submit(desc);
|
||||
ret = dma_submit_error(cookie);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to do DMA submit\n");
|
||||
goto err_unmap;
|
||||
}
|
||||
|
||||
dma_async_issue_pending(chan);
|
||||
ret = wait_for_completion_timeout(&complete, msecs_to_jiffies(1000));
|
||||
if (!ret) {
|
||||
dev_err(dev, "DMA transfer timeout\n");
|
||||
dmaengine_terminate_sync(chan);
|
||||
ret = -ETIMEDOUT;
|
||||
}
|
||||
|
||||
err_unmap:
|
||||
dma_unmap_single(dma_dev, src_addr, size, DMA_FROM_DEVICE);
|
||||
err_unlock:
|
||||
mutex_unlock(&epf_mhi->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
struct epf_dma_filter {
|
||||
struct device *dev;
|
||||
u32 dma_mask;
|
||||
};
|
||||
|
||||
static bool pci_epf_mhi_filter(struct dma_chan *chan, void *node)
|
||||
{
|
||||
struct epf_dma_filter *filter = node;
|
||||
struct dma_slave_caps caps;
|
||||
|
||||
memset(&caps, 0, sizeof(caps));
|
||||
dma_get_slave_caps(chan, &caps);
|
||||
|
||||
return chan->device->dev == filter->dev && filter->dma_mask &
|
||||
caps.directions;
|
||||
}
|
||||
|
||||
static int pci_epf_mhi_dma_init(struct pci_epf_mhi *epf_mhi)
|
||||
{
|
||||
struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
|
||||
struct device *dev = &epf_mhi->epf->dev;
|
||||
struct epf_dma_filter filter;
|
||||
dma_cap_mask_t mask;
|
||||
|
||||
dma_cap_zero(mask);
|
||||
dma_cap_set(DMA_SLAVE, mask);
|
||||
|
||||
filter.dev = dma_dev;
|
||||
filter.dma_mask = BIT(DMA_MEM_TO_DEV);
|
||||
epf_mhi->dma_chan_tx = dma_request_channel(mask, pci_epf_mhi_filter,
|
||||
&filter);
|
||||
if (IS_ERR_OR_NULL(epf_mhi->dma_chan_tx)) {
|
||||
dev_err(dev, "Failed to request tx channel\n");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
filter.dma_mask = BIT(DMA_DEV_TO_MEM);
|
||||
epf_mhi->dma_chan_rx = dma_request_channel(mask, pci_epf_mhi_filter,
|
||||
&filter);
|
||||
if (IS_ERR_OR_NULL(epf_mhi->dma_chan_rx)) {
|
||||
dev_err(dev, "Failed to request rx channel\n");
|
||||
dma_release_channel(epf_mhi->dma_chan_tx);
|
||||
epf_mhi->dma_chan_tx = NULL;
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void pci_epf_mhi_dma_deinit(struct pci_epf_mhi *epf_mhi)
|
||||
{
|
||||
dma_release_channel(epf_mhi->dma_chan_tx);
|
||||
dma_release_channel(epf_mhi->dma_chan_rx);
|
||||
epf_mhi->dma_chan_tx = NULL;
|
||||
epf_mhi->dma_chan_rx = NULL;
|
||||
}
|
||||
|
||||
static int pci_epf_mhi_core_init(struct pci_epf *epf)
|
||||
{
|
||||
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
|
||||
@ -270,6 +498,10 @@ static int pci_epf_mhi_core_init(struct pci_epf *epf)
|
||||
return ret;
|
||||
}
|
||||
|
||||
epf_mhi->epc_features = pci_epc_get_features(epc, epf->func_no, epf->vfunc_no);
|
||||
if (!epf_mhi->epc_features)
|
||||
return -ENODATA;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -282,6 +514,14 @@ static int pci_epf_mhi_link_up(struct pci_epf *epf)
|
||||
struct device *dev = &epf->dev;
|
||||
int ret;
|
||||
|
||||
if (info->flags & MHI_EPF_USE_DMA) {
|
||||
ret = pci_epf_mhi_dma_init(epf_mhi);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to initialize DMA: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
mhi_cntrl->mmio = epf_mhi->mmio;
|
||||
mhi_cntrl->irq = epf_mhi->irq;
|
||||
mhi_cntrl->mru = info->mru;
|
||||
@ -291,13 +531,20 @@ static int pci_epf_mhi_link_up(struct pci_epf *epf)
|
||||
mhi_cntrl->raise_irq = pci_epf_mhi_raise_irq;
|
||||
mhi_cntrl->alloc_map = pci_epf_mhi_alloc_map;
|
||||
mhi_cntrl->unmap_free = pci_epf_mhi_unmap_free;
|
||||
mhi_cntrl->read_from_host = pci_epf_mhi_read_from_host;
|
||||
mhi_cntrl->write_to_host = pci_epf_mhi_write_to_host;
|
||||
if (info->flags & MHI_EPF_USE_DMA) {
|
||||
mhi_cntrl->read_from_host = pci_epf_mhi_edma_read;
|
||||
mhi_cntrl->write_to_host = pci_epf_mhi_edma_write;
|
||||
} else {
|
||||
mhi_cntrl->read_from_host = pci_epf_mhi_iatu_read;
|
||||
mhi_cntrl->write_to_host = pci_epf_mhi_iatu_write;
|
||||
}
|
||||
|
||||
/* Register the MHI EP controller */
|
||||
ret = mhi_ep_register_controller(mhi_cntrl, info->config);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to register MHI EP controller: %d\n", ret);
|
||||
if (info->flags & MHI_EPF_USE_DMA)
|
||||
pci_epf_mhi_dma_deinit(epf_mhi);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -307,10 +554,13 @@ static int pci_epf_mhi_link_up(struct pci_epf *epf)
|
||||
static int pci_epf_mhi_link_down(struct pci_epf *epf)
|
||||
{
|
||||
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
|
||||
const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
|
||||
struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
|
||||
|
||||
if (mhi_cntrl->mhi_dev) {
|
||||
mhi_ep_power_down(mhi_cntrl);
|
||||
if (info->flags & MHI_EPF_USE_DMA)
|
||||
pci_epf_mhi_dma_deinit(epf_mhi);
|
||||
mhi_ep_unregister_controller(mhi_cntrl);
|
||||
}
|
||||
|
||||
@ -320,6 +570,7 @@ static int pci_epf_mhi_link_down(struct pci_epf *epf)
|
||||
static int pci_epf_mhi_bme(struct pci_epf *epf)
|
||||
{
|
||||
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
|
||||
const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
|
||||
struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
|
||||
struct device *dev = &epf->dev;
|
||||
int ret;
|
||||
@ -332,6 +583,8 @@ static int pci_epf_mhi_bme(struct pci_epf *epf)
|
||||
ret = mhi_ep_power_up(mhi_cntrl);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to power up MHI EP: %d\n", ret);
|
||||
if (info->flags & MHI_EPF_USE_DMA)
|
||||
pci_epf_mhi_dma_deinit(epf_mhi);
|
||||
mhi_ep_unregister_controller(mhi_cntrl);
|
||||
}
|
||||
}
|
||||
@ -382,6 +635,8 @@ static void pci_epf_mhi_unbind(struct pci_epf *epf)
|
||||
*/
|
||||
if (mhi_cntrl->mhi_dev) {
|
||||
mhi_ep_power_down(mhi_cntrl);
|
||||
if (info->flags & MHI_EPF_USE_DMA)
|
||||
pci_epf_mhi_dma_deinit(epf_mhi);
|
||||
mhi_ep_unregister_controller(mhi_cntrl);
|
||||
}
|
||||
|
||||
@ -422,9 +677,8 @@ static int pci_epf_mhi_probe(struct pci_epf *epf,
|
||||
}
|
||||
|
||||
static const struct pci_epf_device_id pci_epf_mhi_ids[] = {
|
||||
{
|
||||
.name = "sdx55", .driver_data = (kernel_ulong_t)&sdx55_info,
|
||||
},
|
||||
{ .name = "sdx55", .driver_data = (kernel_ulong_t)&sdx55_info },
|
||||
{ .name = "sm8450", .driver_data = (kernel_ulong_t)&sm8450_info },
|
||||
{},
|
||||
};
|
||||
|
||||
|
@ -986,22 +986,22 @@ static struct config_group *epf_ntb_add_cfs(struct pci_epf *epf,
|
||||
/*==== virtual PCI bus driver, which only load virtual NTB PCI driver ====*/
|
||||
|
||||
static u32 pci_space[] = {
|
||||
0xffffffff, /*DeviceID, Vendor ID*/
|
||||
0, /*Status, Command*/
|
||||
0xffffffff, /*Class code, subclass, prog if, revision id*/
|
||||
0x40, /*bist, header type, latency Timer, cache line size*/
|
||||
0, /*BAR 0*/
|
||||
0, /*BAR 1*/
|
||||
0, /*BAR 2*/
|
||||
0, /*BAR 3*/
|
||||
0, /*BAR 4*/
|
||||
0, /*BAR 5*/
|
||||
0, /*Cardbus cis point*/
|
||||
0, /*Subsystem ID Subystem vendor id*/
|
||||
0, /*ROM Base Address*/
|
||||
0, /*Reserved, Cap. Point*/
|
||||
0, /*Reserved,*/
|
||||
0, /*Max Lat, Min Gnt, interrupt pin, interrupt line*/
|
||||
0xffffffff, /* Device ID, Vendor ID */
|
||||
0, /* Status, Command */
|
||||
0xffffffff, /* Base Class, Subclass, Prog Intf, Revision ID */
|
||||
0x40, /* BIST, Header Type, Latency Timer, Cache Line Size */
|
||||
0, /* BAR 0 */
|
||||
0, /* BAR 1 */
|
||||
0, /* BAR 2 */
|
||||
0, /* BAR 3 */
|
||||
0, /* BAR 4 */
|
||||
0, /* BAR 5 */
|
||||
0, /* Cardbus CIS Pointer */
|
||||
0, /* Subsystem ID, Subsystem Vendor ID */
|
||||
0, /* ROM Base Address */
|
||||
0, /* Reserved, Capabilities Pointer */
|
||||
0, /* Reserved */
|
||||
0, /* Max_Lat, Min_Gnt, Interrupt Pin, Interrupt Line */
|
||||
};
|
||||
|
||||
static int pci_read(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *val)
|
||||
|
@ -9,7 +9,6 @@
|
||||
#include <linux/device.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_device.h>
|
||||
|
||||
#include <linux/pci-epc.h>
|
||||
#include <linux/pci-epf.h>
|
||||
|
@ -115,6 +115,16 @@ err_mem:
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pci_epc_multi_mem_init);
|
||||
|
||||
/**
|
||||
* pci_epc_mem_init() - Initialize the pci_epc_mem structure
|
||||
* @epc: the EPC device that invoked pci_epc_mem_init
|
||||
* @base: Physical address of the window region
|
||||
* @size: Total Size of the window region
|
||||
* @page_size: Page size of the window region
|
||||
*
|
||||
* Invoke to initialize a single pci_epc_mem structure used by the
|
||||
* endpoint functions to allocate memory for mapping the PCI host memory
|
||||
*/
|
||||
int pci_epc_mem_init(struct pci_epc *epc, phys_addr_t base,
|
||||
size_t size, size_t page_size)
|
||||
{
|
||||
|
@ -178,7 +178,6 @@ void acpiphp_unregister_hotplug_slot(struct acpiphp_slot *slot);
|
||||
int acpiphp_enable_slot(struct acpiphp_slot *slot);
|
||||
int acpiphp_disable_slot(struct acpiphp_slot *slot);
|
||||
u8 acpiphp_get_power_status(struct acpiphp_slot *slot);
|
||||
u8 acpiphp_get_attention_status(struct acpiphp_slot *slot);
|
||||
u8 acpiphp_get_latch_status(struct acpiphp_slot *slot);
|
||||
u8 acpiphp_get_adapter_status(struct acpiphp_slot *slot);
|
||||
|
||||
|
@ -83,8 +83,6 @@ extern int cpci_debug;
|
||||
* board/chassis drivers.
|
||||
*/
|
||||
u8 cpci_get_attention_status(struct slot *slot);
|
||||
u8 cpci_get_latch_status(struct slot *slot);
|
||||
u8 cpci_get_adapter_status(struct slot *slot);
|
||||
u16 cpci_get_hs_csr(struct slot *slot);
|
||||
int cpci_set_attention_status(struct slot *slot, int status);
|
||||
int cpci_check_and_clear_ins(struct slot *slot);
|
||||
|
@ -264,8 +264,6 @@ extern struct list_head ibmphp_slot_head;
|
||||
void ibmphp_free_ebda_hpc_queue(void);
|
||||
int ibmphp_access_ebda(void);
|
||||
struct slot *ibmphp_get_slot_from_physical_num(u8);
|
||||
int ibmphp_get_total_hp_slots(void);
|
||||
void ibmphp_free_ibm_slot(struct slot *);
|
||||
void ibmphp_free_bus_info_queue(void);
|
||||
void ibmphp_free_ebda_pci_rsrc_queue(void);
|
||||
struct bus_info *ibmphp_find_same_bus_num(u32);
|
||||
|
@ -329,7 +329,7 @@ error:
|
||||
static int configure_device(struct pci_func *func)
|
||||
{
|
||||
u32 bar[6];
|
||||
u32 address[] = {
|
||||
static const u32 address[] = {
|
||||
PCI_BASE_ADDRESS_0,
|
||||
PCI_BASE_ADDRESS_1,
|
||||
PCI_BASE_ADDRESS_2,
|
||||
@ -564,7 +564,7 @@ static int configure_bridge(struct pci_func **func_passed, u8 slotno)
|
||||
struct resource_node *pfmem = NULL;
|
||||
struct resource_node *bus_pfmem[2] = {NULL, NULL};
|
||||
struct bus_node *bus;
|
||||
u32 address[] = {
|
||||
static const u32 address[] = {
|
||||
PCI_BASE_ADDRESS_0,
|
||||
PCI_BASE_ADDRESS_1,
|
||||
0
|
||||
@ -1053,7 +1053,7 @@ static struct res_needed *scan_behind_bridge(struct pci_func *func, u8 busno)
|
||||
int howmany = 0; /*this is to see if there are any devices behind the bridge */
|
||||
|
||||
u32 bar[6], class;
|
||||
u32 address[] = {
|
||||
static const u32 address[] = {
|
||||
PCI_BASE_ADDRESS_0,
|
||||
PCI_BASE_ADDRESS_1,
|
||||
PCI_BASE_ADDRESS_2,
|
||||
@ -1182,7 +1182,7 @@ static struct res_needed *scan_behind_bridge(struct pci_func *func, u8 busno)
|
||||
static int unconfigure_boot_device(u8 busno, u8 device, u8 function)
|
||||
{
|
||||
u32 start_address;
|
||||
u32 address[] = {
|
||||
static const u32 address[] = {
|
||||
PCI_BASE_ADDRESS_0,
|
||||
PCI_BASE_ADDRESS_1,
|
||||
PCI_BASE_ADDRESS_2,
|
||||
@ -1310,7 +1310,7 @@ static int unconfigure_boot_bridge(u8 busno, u8 device, u8 function)
|
||||
struct resource_node *mem = NULL;
|
||||
struct resource_node *pfmem = NULL;
|
||||
struct bus_node *bus;
|
||||
u32 address[] = {
|
||||
static const u32 address[] = {
|
||||
PCI_BASE_ADDRESS_0,
|
||||
PCI_BASE_ADDRESS_1,
|
||||
0
|
||||
|
@@ -332,17 +332,11 @@ int pciehp_check_link_status(struct controller *ctrl)
static int __pciehp_link_set(struct controller *ctrl, bool enable)
{
	struct pci_dev *pdev = ctrl_dev(ctrl);
	u16 lnk_ctrl;

	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnk_ctrl);
	pcie_capability_clear_and_set_word(pdev, PCI_EXP_LNKCTL,
					   PCI_EXP_LNKCTL_LD,
					   enable ? 0 : PCI_EXP_LNKCTL_LD);

	if (enable)
		lnk_ctrl &= ~PCI_EXP_LNKCTL_LD;
	else
		lnk_ctrl |= PCI_EXP_LNKCTL_LD;

	pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnk_ctrl);
	ctrl_dbg(ctrl, "%s: lnk_ctrl = %x\n", __func__, lnk_ctrl);
	return 0;
}
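The hunk above shows the general pattern this series applies to Link Control updates: an open-coded read/modify/write is replaced by pcie_capability_clear_and_set_word(), which this series additionally serializes (see the spin_lock_init(&dev->pcie_cap_lock) hunk below). A generic sketch, with clear_bits/set_bits standing in for whatever fields a caller touches:

	/* Before: two config accesses, racy against concurrent LNKCTL writers */
	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &val);
	val &= ~clear_bits;
	val |= set_bits;
	pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, val);

	/* After: one locked read/modify/write */
	pcie_capability_clear_and_set_word(pdev, PCI_EXP_LNKCTL, clear_bits, set_bits);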
@@ -41,8 +41,7 @@ int pci_iov_vf_id(struct pci_dev *dev)
		return -EINVAL;

	pf = pci_physfn(dev);
	return (((dev->bus->number << 8) + dev->devfn) -
		((pf->bus->number << 8) + pf->devfn + pf->sriov->offset)) /
	return (pci_dev_id(dev) - (pci_dev_id(pf) + pf->sriov->offset)) /
		pf->sriov->stride;
}
EXPORT_SYMBOL_GPL(pci_iov_vf_id);
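For reference, pci_dev_id() packs the same bus/devfn bits the open-coded expressions used; roughly (per include/linux/pci.h):

	#define PCI_DEVID(bus, devfn)	((((u16)(bus)) << 8) | (devfn))

	static inline u16 pci_dev_id(const struct pci_dev *dev)
	{
		return PCI_DEVID(dev->bus->number, dev->devfn);
	}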
@ -336,7 +336,7 @@ bool pci_msi_domain_supports(struct pci_dev *pdev, unsigned int feature_mask,
|
||||
if (!irq_domain_is_msi_parent(domain)) {
|
||||
/*
|
||||
* For "global" PCI/MSI interrupt domains the associated
|
||||
* msi_domain_info::flags is the authoritive source of
|
||||
* msi_domain_info::flags is the authoritative source of
|
||||
* information.
|
||||
*/
|
||||
info = domain->host_data;
|
||||
@ -344,7 +344,7 @@ bool pci_msi_domain_supports(struct pci_dev *pdev, unsigned int feature_mask,
|
||||
} else {
|
||||
/*
|
||||
* For MSI parent domains the supported feature set
|
||||
* is avaliable in the parent ops. This makes checks
|
||||
* is available in the parent ops. This makes checks
|
||||
* possible before actually instantiating the
|
||||
* per device domain because the parent is never
|
||||
* expanding the PCI/MSI functionality.
|
||||
|
@ -435,7 +435,7 @@ static const struct pci_p2pdma_whitelist_entry {
|
||||
/* Intel Xeon E7 v3/Xeon E5 v3/Core i7 */
|
||||
{PCI_VENDOR_ID_INTEL, 0x2f00, REQ_SAME_HOST_BRIDGE},
|
||||
{PCI_VENDOR_ID_INTEL, 0x2f01, REQ_SAME_HOST_BRIDGE},
|
||||
/* Intel SkyLake-E */
|
||||
/* Intel Skylake-E */
|
||||
{PCI_VENDOR_ID_INTEL, 0x2030, 0},
|
||||
{PCI_VENDOR_ID_INTEL, 0x2031, 0},
|
||||
{PCI_VENDOR_ID_INTEL, 0x2032, 0},
|
||||
@ -532,8 +532,7 @@ static bool host_bridge_whitelist(struct pci_dev *a, struct pci_dev *b,
|
||||
|
||||
static unsigned long map_types_idx(struct pci_dev *client)
|
||||
{
|
||||
return (pci_domain_nr(client->bus) << 16) |
|
||||
(client->bus->number << 8) | client->devfn;
|
||||
return (pci_domain_nr(client->bus) << 16) | pci_dev_id(client);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -193,7 +193,7 @@ static ssize_t new_id_store(struct device_driver *driver, const char *buf,
|
||||
u32 vendor, device, subvendor = PCI_ANY_ID,
|
||||
subdevice = PCI_ANY_ID, class = 0, class_mask = 0;
|
||||
unsigned long driver_data = 0;
|
||||
int fields = 0;
|
||||
int fields;
|
||||
int retval = 0;
|
||||
|
||||
fields = sscanf(buf, "%x %x %x %x %x %x %lx",
|
||||
@ -260,7 +260,7 @@ static ssize_t remove_id_store(struct device_driver *driver, const char *buf,
|
||||
struct pci_driver *pdrv = to_pci_driver(driver);
|
||||
u32 vendor, device, subvendor = PCI_ANY_ID,
|
||||
subdevice = PCI_ANY_ID, class = 0, class_mask = 0;
|
||||
int fields = 0;
|
||||
int fields;
|
||||
size_t retval = -ENODEV;
|
||||
|
||||
fields = sscanf(buf, "%x %x %x %x %x %x",
|
||||
@ -1474,14 +1474,15 @@ static struct pci_driver pci_compat_driver = {
|
||||
*/
|
||||
struct pci_driver *pci_dev_driver(const struct pci_dev *dev)
|
||||
{
|
||||
int i;
|
||||
|
||||
if (dev->driver)
|
||||
return dev->driver;
|
||||
else {
|
||||
int i;
|
||||
for (i = 0; i <= PCI_ROM_RESOURCE; i++)
|
||||
if (dev->resource[i].flags & IORESOURCE_BUSY)
|
||||
return &pci_compat_driver;
|
||||
}
|
||||
|
||||
for (i = 0; i <= PCI_ROM_RESOURCE; i++)
|
||||
if (dev->resource[i].flags & IORESOURCE_BUSY)
|
||||
return &pci_compat_driver;
|
||||
|
||||
return NULL;
|
||||
}
|
||||
EXPORT_SYMBOL(pci_dev_driver);
|
||||
@ -1705,7 +1706,6 @@ struct bus_type pcie_port_bus_type = {
|
||||
.name = "pci_express",
|
||||
.match = pcie_port_bus_match,
|
||||
};
|
||||
EXPORT_SYMBOL_GPL(pcie_port_bus_type);
|
||||
#endif
|
||||
|
||||
static int __init pci_driver_init(void)
|
||||
|
@ -1083,6 +1083,7 @@ static ssize_t pci_resource_io(struct file *filp, struct kobject *kobj,
|
||||
struct bin_attribute *attr, char *buf,
|
||||
loff_t off, size_t count, bool write)
|
||||
{
|
||||
#ifdef CONFIG_HAS_IOPORT
|
||||
struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
|
||||
int bar = (unsigned long)attr->private;
|
||||
unsigned long port = off;
|
||||
@ -1116,6 +1117,9 @@ static ssize_t pci_resource_io(struct file *filp, struct kobject *kobj,
|
||||
return 4;
|
||||
}
|
||||
return -EINVAL;
|
||||
#else
|
||||
return -ENXIO;
|
||||
#endif
|
||||
}
|
||||
|
||||
static ssize_t pci_read_resource_io(struct file *filp, struct kobject *kobj,
|
||||
|
@@ -1226,6 +1226,10 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
 *
 * On success, return 0 or 1, depending on whether or not it is necessary to
 * restore the device's BARs subsequently (1 is returned in that case).
 *
 * On failure, return a negative error code. Always return failure if @dev
 * lacks a Power Management Capability, even if the platform was able to
 * put the device in D0 via non-PCI means.
 */
int pci_power_up(struct pci_dev *dev)
{
@@ -1242,9 +1246,6 @@ int pci_power_up(struct pci_dev *dev)
	else
		dev->current_state = state;

	if (state == PCI_D0)
		return 0;

	return -EIO;
}
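Read together with the comment above, a minimal caller sketch looks like this; the BAR-restore step is a stand-in for whatever restore path the caller already uses (the core's pci_restore_bars() is assumed here, not taken from this hunk):

	ret = pci_power_up(dev);
	if (ret < 0)
		return ret;		/* device did not reach D0 */
	if (ret > 0)
		pci_restore_bars(dev);	/* BAR contents were lost and must be rewritten */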
@ -1290,7 +1291,7 @@ end:
|
||||
*
|
||||
* Call pci_power_up() to put @dev into D0, read from its PCI_PM_CTRL register
|
||||
* to confirm the state change, restore its BARs if they might be lost and
|
||||
* reconfigure ASPM in acordance with the new power state.
|
||||
* reconfigure ASPM in accordance with the new power state.
|
||||
*
|
||||
* If pci_restore_state() is going to be called right after a power state change
|
||||
* to D0, it is more efficient to use pci_power_up() directly instead of this
|
||||
@ -1302,8 +1303,12 @@ static int pci_set_full_power_state(struct pci_dev *dev)
|
||||
int ret;
|
||||
|
||||
ret = pci_power_up(dev);
|
||||
if (ret < 0)
|
||||
if (ret < 0) {
|
||||
if (dev->current_state == PCI_D0)
|
||||
return 0;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
|
||||
dev->current_state = pmcsr & PCI_PM_CTRL_STATE_MASK;
|
||||
@ -1681,7 +1686,7 @@ int pci_save_state(struct pci_dev *dev)
|
||||
/* XXX: 100% dword access ok here? */
|
||||
for (i = 0; i < 16; i++) {
|
||||
pci_read_config_dword(dev, i * 4, &dev->saved_config_space[i]);
|
||||
pci_dbg(dev, "saving config space at offset %#x (reading %#x)\n",
|
||||
pci_dbg(dev, "save config %#04x: %#010x\n",
|
||||
i * 4, dev->saved_config_space[i]);
|
||||
}
|
||||
dev->state_saved = true;
|
||||
@ -1712,7 +1717,7 @@ static void pci_restore_config_dword(struct pci_dev *pdev, int offset,
|
||||
return;
|
||||
|
||||
for (;;) {
|
||||
pci_dbg(pdev, "restoring config space at offset %#x (was %#x, writing %#x)\n",
|
||||
pci_dbg(pdev, "restore config %#04x: %#010x -> %#010x\n",
|
||||
offset, val, saved_val);
|
||||
pci_write_config_dword(pdev, offset, saved_val);
|
||||
if (retry-- <= 0)
|
||||
@ -2415,10 +2420,13 @@ static void pci_pme_list_scan(struct work_struct *work)
|
||||
|
||||
mutex_lock(&pci_pme_list_mutex);
|
||||
list_for_each_entry_safe(pme_dev, n, &pci_pme_list, list) {
|
||||
if (pme_dev->dev->pme_poll) {
|
||||
struct pci_dev *bridge;
|
||||
struct pci_dev *pdev = pme_dev->dev;
|
||||
|
||||
if (pdev->pme_poll) {
|
||||
struct pci_dev *bridge = pdev->bus->self;
|
||||
struct device *dev = &pdev->dev;
|
||||
int pm_status;
|
||||
|
||||
bridge = pme_dev->dev->bus->self;
|
||||
/*
|
||||
* If bridge is in low power state, the
|
||||
* configuration space of subordinate devices
|
||||
@ -2426,14 +2434,20 @@ static void pci_pme_list_scan(struct work_struct *work)
|
||||
*/
|
||||
if (bridge && bridge->current_state != PCI_D0)
|
||||
continue;
|
||||
|
||||
/*
|
||||
* If the device is in D3cold it should not be
|
||||
* polled either.
|
||||
* If the device is in a low power state it
|
||||
* should not be polled either.
|
||||
*/
|
||||
if (pme_dev->dev->current_state == PCI_D3cold)
|
||||
pm_status = pm_runtime_get_if_active(dev, true);
|
||||
if (!pm_status)
|
||||
continue;
|
||||
|
||||
pci_pme_wakeup(pme_dev->dev, NULL);
|
||||
if (pdev->current_state != PCI_D3cold)
|
||||
pci_pme_wakeup(pdev, NULL);
|
||||
|
||||
if (pm_status > 0)
|
||||
pm_runtime_put(dev);
|
||||
} else {
|
||||
list_del(&pme_dev->list);
|
||||
kfree(pme_dev);
|
||||
@ -4191,16 +4205,12 @@ int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr,
|
||||
|
||||
phys_addr_t pci_pio_to_address(unsigned long pio)
|
||||
{
|
||||
phys_addr_t address = (phys_addr_t)OF_BAD_ADDR;
|
||||
|
||||
#ifdef PCI_IOBASE
|
||||
if (pio >= MMIO_UPPER_LIMIT)
|
||||
return address;
|
||||
|
||||
address = logic_pio_to_hwaddr(pio);
|
||||
if (pio < MMIO_UPPER_LIMIT)
|
||||
return logic_pio_to_hwaddr(pio);
|
||||
#endif
|
||||
|
||||
return address;
|
||||
return (phys_addr_t) OF_BAD_ADDR;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pci_pio_to_address);
|
||||
|
||||
@ -4927,7 +4937,6 @@ static int pcie_wait_for_link_status(struct pci_dev *pdev,
|
||||
int pcie_retrain_link(struct pci_dev *pdev, bool use_lt)
|
||||
{
|
||||
int rc;
|
||||
u16 lnkctl;
|
||||
|
||||
/*
|
||||
* Ensure the updated LNKCTL parameters are used during link
|
||||
@ -4939,17 +4948,14 @@ int pcie_retrain_link(struct pci_dev *pdev, bool use_lt)
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnkctl);
|
||||
lnkctl |= PCI_EXP_LNKCTL_RL;
|
||||
pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl);
|
||||
pcie_capability_set_word(pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_RL);
|
||||
if (pdev->clear_retrain_link) {
|
||||
/*
|
||||
* Due to an erratum in some devices the Retrain Link bit
|
||||
* needs to be cleared again manually to allow the link
|
||||
* training to succeed.
|
||||
*/
|
||||
lnkctl &= ~PCI_EXP_LNKCTL_RL;
|
||||
pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl);
|
||||
pcie_capability_clear_word(pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_RL);
|
||||
}
|
||||
|
||||
return pcie_wait_for_link_status(pdev, use_lt, !use_lt);
|
||||
@ -5631,7 +5637,7 @@ int pci_try_reset_function(struct pci_dev *dev)
|
||||
EXPORT_SYMBOL_GPL(pci_try_reset_function);
|
||||
|
||||
/* Do any devices on or below this bus prevent a bus reset? */
|
||||
static bool pci_bus_resetable(struct pci_bus *bus)
|
||||
static bool pci_bus_resettable(struct pci_bus *bus)
|
||||
{
|
||||
struct pci_dev *dev;
|
||||
|
||||
@ -5641,7 +5647,7 @@ static bool pci_bus_resetable(struct pci_bus *bus)
|
||||
|
||||
list_for_each_entry(dev, &bus->devices, bus_list) {
|
||||
if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
|
||||
(dev->subordinate && !pci_bus_resetable(dev->subordinate)))
|
||||
(dev->subordinate && !pci_bus_resettable(dev->subordinate)))
|
||||
return false;
|
||||
}
|
||||
|
||||
@ -5699,7 +5705,7 @@ unlock:
|
||||
}
|
||||
|
||||
/* Do any devices on or below this slot prevent a bus reset? */
|
||||
static bool pci_slot_resetable(struct pci_slot *slot)
|
||||
static bool pci_slot_resettable(struct pci_slot *slot)
|
||||
{
|
||||
struct pci_dev *dev;
|
||||
|
||||
@ -5711,7 +5717,7 @@ static bool pci_slot_resetable(struct pci_slot *slot)
|
||||
if (!dev->slot || dev->slot != slot)
|
||||
continue;
|
||||
if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
|
||||
(dev->subordinate && !pci_bus_resetable(dev->subordinate)))
|
||||
(dev->subordinate && !pci_bus_resettable(dev->subordinate)))
|
||||
return false;
|
||||
}
|
||||
|
||||
@ -5847,7 +5853,7 @@ static int pci_slot_reset(struct pci_slot *slot, bool probe)
|
||||
{
|
||||
int rc;
|
||||
|
||||
if (!slot || !pci_slot_resetable(slot))
|
||||
if (!slot || !pci_slot_resettable(slot))
|
||||
return -ENOTTY;
|
||||
|
||||
if (!probe)
|
||||
@ -5914,7 +5920,7 @@ static int pci_bus_reset(struct pci_bus *bus, bool probe)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (!bus->self || !pci_bus_resetable(bus))
|
||||
if (!bus->self || !pci_bus_resettable(bus))
|
||||
return -ENOTTY;
|
||||
|
||||
if (probe)
|
||||
|
@ -13,6 +13,12 @@
|
||||
|
||||
#define PCIE_LINK_RETRAIN_TIMEOUT_MS 1000
|
||||
|
||||
/*
|
||||
* PCIe r6.0, sec 5.3.3.2.1 <PME Synchronization>
|
||||
* Recommends 1ms to 10ms timeout to check L2 ready.
|
||||
*/
|
||||
#define PCIE_PME_TO_L2_TIMEOUT_US 10000
|
||||
|
||||
extern const unsigned char pcie_link_speed[];
|
||||
extern bool pci_early_dump;
|
||||
|
||||
@ -147,8 +153,8 @@ int pci_hp_add_bridge(struct pci_dev *dev);
|
||||
void pci_create_legacy_files(struct pci_bus *bus);
|
||||
void pci_remove_legacy_files(struct pci_bus *bus);
|
||||
#else
|
||||
static inline void pci_create_legacy_files(struct pci_bus *bus) { return; }
|
||||
static inline void pci_remove_legacy_files(struct pci_bus *bus) { return; }
|
||||
static inline void pci_create_legacy_files(struct pci_bus *bus) { }
|
||||
static inline void pci_remove_legacy_files(struct pci_bus *bus) { }
|
||||
#endif
|
||||
|
||||
/* Lock for read/write access to pci device and bus lists */
|
||||
@ -422,9 +428,9 @@ void dpc_process_error(struct pci_dev *pdev);
|
||||
pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
|
||||
bool pci_dpc_recovered(struct pci_dev *pdev);
|
||||
#else
|
||||
static inline void pci_save_dpc_state(struct pci_dev *dev) {}
|
||||
static inline void pci_restore_dpc_state(struct pci_dev *dev) {}
|
||||
static inline void pci_dpc_init(struct pci_dev *pdev) {}
|
||||
static inline void pci_save_dpc_state(struct pci_dev *dev) { }
|
||||
static inline void pci_restore_dpc_state(struct pci_dev *dev) { }
|
||||
static inline void pci_dpc_init(struct pci_dev *pdev) { }
|
||||
static inline bool pci_dpc_recovered(struct pci_dev *pdev) { return false; }
|
||||
#endif
|
||||
|
||||
@ -436,12 +442,12 @@ void pcie_walk_rcec(struct pci_dev *rcec,
|
||||
int (*cb)(struct pci_dev *, void *),
|
||||
void *userdata);
|
||||
#else
|
||||
static inline void pci_rcec_init(struct pci_dev *dev) {}
|
||||
static inline void pci_rcec_exit(struct pci_dev *dev) {}
|
||||
static inline void pcie_link_rcec(struct pci_dev *rcec) {}
|
||||
static inline void pci_rcec_init(struct pci_dev *dev) { }
|
||||
static inline void pci_rcec_exit(struct pci_dev *dev) { }
|
||||
static inline void pcie_link_rcec(struct pci_dev *rcec) { }
|
||||
static inline void pcie_walk_rcec(struct pci_dev *rcec,
|
||||
int (*cb)(struct pci_dev *, void *),
|
||||
void *userdata) {}
|
||||
void *userdata) { }
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_PCI_ATS
|
||||
@ -484,16 +490,9 @@ static inline int pci_iov_init(struct pci_dev *dev)
|
||||
{
|
||||
return -ENODEV;
|
||||
}
|
||||
static inline void pci_iov_release(struct pci_dev *dev)
|
||||
|
||||
{
|
||||
}
|
||||
static inline void pci_iov_remove(struct pci_dev *dev)
|
||||
{
|
||||
}
|
||||
static inline void pci_restore_iov_state(struct pci_dev *dev)
|
||||
{
|
||||
}
|
||||
static inline void pci_iov_release(struct pci_dev *dev) { }
|
||||
static inline void pci_iov_remove(struct pci_dev *dev) { }
|
||||
static inline void pci_restore_iov_state(struct pci_dev *dev) { }
|
||||
static inline int pci_iov_bus_range(struct pci_bus *bus)
|
||||
{
|
||||
return 0;
|
||||
@ -730,7 +729,7 @@ static inline int pci_dev_acpi_reset(struct pci_dev *dev, bool probe)
|
||||
{
|
||||
return -ENOTTY;
|
||||
}
|
||||
static inline void pci_set_acpi_fwnode(struct pci_dev *dev) {}
|
||||
static inline void pci_set_acpi_fwnode(struct pci_dev *dev) { }
|
||||
static inline int pci_acpi_program_hp_params(struct pci_dev *dev)
|
||||
{
|
||||
return -ENODEV;
|
||||
@ -751,7 +750,7 @@ static inline pci_power_t acpi_pci_get_power_state(struct pci_dev *dev)
|
||||
{
|
||||
return PCI_UNKNOWN;
|
||||
}
|
||||
static inline void acpi_pci_refresh_power_state(struct pci_dev *dev) {}
|
||||
static inline void acpi_pci_refresh_power_state(struct pci_dev *dev) { }
|
||||
static inline int acpi_pci_wakeup(struct pci_dev *dev, bool enable)
|
||||
{
|
||||
return -ENODEV;
|
||||
|
@ -230,7 +230,7 @@ int pcie_aer_is_native(struct pci_dev *dev)
|
||||
return pcie_ports_native || host->native_aer;
|
||||
}
|
||||
|
||||
int pci_enable_pcie_error_reporting(struct pci_dev *dev)
|
||||
static int pci_enable_pcie_error_reporting(struct pci_dev *dev)
|
||||
{
|
||||
int rc;
|
||||
|
||||
@ -240,19 +240,6 @@ int pci_enable_pcie_error_reporting(struct pci_dev *dev)
|
||||
rc = pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_AER_FLAGS);
|
||||
return pcibios_err_to_errno(rc);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pci_enable_pcie_error_reporting);
|
||||
|
||||
int pci_disable_pcie_error_reporting(struct pci_dev *dev)
|
||||
{
|
||||
int rc;
|
||||
|
||||
if (!pcie_aer_is_native(dev))
|
||||
return -EIO;
|
||||
|
||||
rc = pcie_capability_clear_word(dev, PCI_EXP_DEVCTL, PCI_EXP_AER_FLAGS);
|
||||
return pcibios_err_to_errno(rc);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pci_disable_pcie_error_reporting);
|
||||
|
||||
int pci_aer_clear_nonfatal_status(struct pci_dev *dev)
|
||||
{
|
||||
@ -712,7 +699,7 @@ static void __aer_print_error(struct pci_dev *dev,
|
||||
void aer_print_error(struct pci_dev *dev, struct aer_err_info *info)
|
||||
{
|
||||
int layer, agent;
|
||||
int id = ((dev->bus->number << 8) | dev->devfn);
|
||||
int id = pci_dev_id(dev);
|
||||
const char *level;
|
||||
|
||||
if (!info->status) {
|
||||
@ -847,7 +834,7 @@ static bool is_error_source(struct pci_dev *dev, struct aer_err_info *e_info)
|
||||
if ((PCI_BUS_NUM(e_info->id) != 0) &&
|
||||
!(dev->bus->bus_flags & PCI_BUS_FLAGS_NO_AERSID)) {
|
||||
/* Device ID match? */
|
||||
if (e_info->id == ((dev->bus->number << 8) | dev->devfn))
|
||||
if (e_info->id == pci_dev_id(dev))
|
||||
return true;
|
||||
|
||||
/* Continue id comparing if there is no multiple error */
|
||||
@ -981,8 +968,7 @@ static void handle_error_source(struct pci_dev *dev, struct aer_err_info *info)
|
||||
|
||||
#ifdef CONFIG_ACPI_APEI_PCIEAER
|
||||
|
||||
#define AER_RECOVER_RING_ORDER 4
|
||||
#define AER_RECOVER_RING_SIZE (1 << AER_RECOVER_RING_ORDER)
|
||||
#define AER_RECOVER_RING_SIZE 16
|
||||
|
||||
struct aer_recover_entry {
|
||||
u8 bus;
|
||||
|
@ -199,7 +199,7 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
|
||||
static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
|
||||
{
|
||||
int same_clock = 1;
|
||||
u16 reg16, parent_reg, child_reg[8];
|
||||
u16 reg16, ccc, parent_old_ccc, child_old_ccc[8];
|
||||
struct pci_dev *child, *parent = link->pdev;
|
||||
struct pci_bus *linkbus = parent->subordinate;
|
||||
/*
|
||||
@ -221,6 +221,7 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
|
||||
|
||||
/* Port might be already in common clock mode */
|
||||
pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
|
||||
parent_old_ccc = reg16 & PCI_EXP_LNKCTL_CCC;
|
||||
if (same_clock && (reg16 & PCI_EXP_LNKCTL_CCC)) {
|
||||
bool consistent = true;
|
||||
|
||||
@ -237,34 +238,29 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
|
||||
pci_info(parent, "ASPM: current common clock configuration is inconsistent, reconfiguring\n");
|
||||
}
|
||||
|
||||
ccc = same_clock ? PCI_EXP_LNKCTL_CCC : 0;
|
||||
/* Configure downstream component, all functions */
|
||||
list_for_each_entry(child, &linkbus->devices, bus_list) {
|
||||
pcie_capability_read_word(child, PCI_EXP_LNKCTL, &reg16);
|
||||
child_reg[PCI_FUNC(child->devfn)] = reg16;
|
||||
if (same_clock)
|
||||
reg16 |= PCI_EXP_LNKCTL_CCC;
|
||||
else
|
||||
reg16 &= ~PCI_EXP_LNKCTL_CCC;
|
||||
pcie_capability_write_word(child, PCI_EXP_LNKCTL, reg16);
|
||||
child_old_ccc[PCI_FUNC(child->devfn)] = reg16 & PCI_EXP_LNKCTL_CCC;
|
||||
pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_CCC, ccc);
|
||||
}
|
||||
|
||||
/* Configure upstream component */
|
||||
pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
|
||||
parent_reg = reg16;
|
||||
if (same_clock)
|
||||
reg16 |= PCI_EXP_LNKCTL_CCC;
|
||||
else
|
||||
reg16 &= ~PCI_EXP_LNKCTL_CCC;
|
||||
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
|
||||
pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_CCC, ccc);
|
||||
|
||||
if (pcie_retrain_link(link->pdev, true)) {
|
||||
|
||||
/* Training failed. Restore common clock configurations */
|
||||
pci_err(parent, "ASPM: Could not configure common clock\n");
|
||||
list_for_each_entry(child, &linkbus->devices, bus_list)
|
||||
pcie_capability_write_word(child, PCI_EXP_LNKCTL,
|
||||
child_reg[PCI_FUNC(child->devfn)]);
|
||||
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg);
|
||||
pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_CCC,
|
||||
child_old_ccc[PCI_FUNC(child->devfn)]);
|
||||
pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL,
|
||||
PCI_EXP_LNKCTL_CCC, parent_old_ccc);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -8,7 +8,6 @@
|
||||
#include <linux/init.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/msi.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/pci_hotplug.h>
|
||||
#include <linux/slab.h>
|
||||
@ -2137,7 +2136,7 @@ static void pci_configure_relaxed_ordering(struct pci_dev *dev)
|
||||
{
|
||||
struct pci_dev *root;
|
||||
|
||||
/* PCI_EXP_DEVICE_RELAX_EN is RsvdP in VFs */
|
||||
/* PCI_EXP_DEVCTL_RELAX_EN is RsvdP in VFs */
|
||||
if (dev->is_virtfn)
|
||||
return;
|
||||
|
||||
@ -2324,6 +2323,7 @@ struct pci_dev *pci_alloc_dev(struct pci_bus *bus)
|
||||
.end = -1,
|
||||
};
|
||||
|
||||
spin_lock_init(&dev->pcie_cap_lock);
|
||||
#ifdef CONFIG_PCI_MSI
|
||||
raw_spin_lock_init(&dev->msi_lock);
|
||||
#endif
|
||||
|
@ -361,8 +361,9 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_2, quirk_isa_d
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_3, quirk_isa_dma_hangs);
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_HAS_IOPORT
|
||||
/*
|
||||
* Intel NM10 "TigerPoint" LPC PM1a_STS.BM_STS must be clear
|
||||
* Intel NM10 "Tiger Point" LPC PM1a_STS.BM_STS must be clear
|
||||
* for some HT machines to use C4 w/o hanging.
|
||||
*/
|
||||
static void quirk_tigerpoint_bm_sts(struct pci_dev *dev)
|
||||
@ -375,11 +376,12 @@ static void quirk_tigerpoint_bm_sts(struct pci_dev *dev)
|
||||
pm1a = inw(pmbase);
|
||||
|
||||
if (pm1a & 0x10) {
|
||||
pci_info(dev, FW_BUG "TigerPoint LPC.BM_STS cleared\n");
|
||||
pci_info(dev, FW_BUG "Tiger Point LPC.BM_STS cleared\n");
|
||||
outw(0x10, pmbase);
|
||||
}
|
||||
}
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_TGP_LPC, quirk_tigerpoint_bm_sts);
|
||||
#endif
|
||||
|
||||
/* Chipsets where PCI->PCI transfers vanish or hang */
|
||||
static void quirk_nopcipci(struct pci_dev *dev)
|
||||
@ -3073,7 +3075,7 @@ static void __nv_msi_ht_cap_quirk(struct pci_dev *dev, int all)
|
||||
|
||||
/*
|
||||
* HT MSI mapping should be disabled on devices that are below
|
||||
* a non-Hypertransport host bridge. Locate the host bridge...
|
||||
* a non-HyperTransport host bridge. Locate the host bridge.
|
||||
*/
|
||||
host_bridge = pci_get_domain_bus_and_slot(pci_domain_nr(dev->bus), 0,
|
||||
PCI_DEVFN(0, 0));
|
||||
@ -3724,7 +3726,7 @@ static void quirk_no_bus_reset(struct pci_dev *dev)
|
||||
*/
|
||||
static void quirk_nvidia_no_bus_reset(struct pci_dev *dev)
|
||||
{
|
||||
if ((dev->device & 0xffc0) == 0x2340)
|
||||
if ((dev->device & 0xffc0) == 0x2340 || dev->device == 0x1eb8)
|
||||
quirk_no_bus_reset(dev);
|
||||
}
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
|
||||
@ -5729,7 +5731,7 @@ int pci_idt_bus_quirk(struct pci_bus *bus, int devfn, u32 *l, int timeout)
|
||||
/*
|
||||
* Microsemi Switchtec NTB uses devfn proxy IDs to move TLPs between
|
||||
* NT endpoints via the internal switch fabric. These IDs replace the
|
||||
* originating requestor ID TLPs which access host memory on peer NTB
|
||||
* originating Requester ID TLPs which access host memory on peer NTB
|
||||
* ports. Therefore, all proxy IDs must be aliased to the NTB device
|
||||
* to permit access when the IOMMU is turned on.
|
||||
*/
|
||||
@ -5867,6 +5869,42 @@ SWITCHTEC_QUIRK(0x4428); /* PSXA 28XG4 */
|
||||
SWITCHTEC_QUIRK(0x4552); /* PAXA 52XG4 */
|
||||
SWITCHTEC_QUIRK(0x4536); /* PAXA 36XG4 */
|
||||
SWITCHTEC_QUIRK(0x4528); /* PAXA 28XG4 */
|
||||
SWITCHTEC_QUIRK(0x5000); /* PFX 100XG5 */
|
||||
SWITCHTEC_QUIRK(0x5084); /* PFX 84XG5 */
|
||||
SWITCHTEC_QUIRK(0x5068); /* PFX 68XG5 */
|
||||
SWITCHTEC_QUIRK(0x5052); /* PFX 52XG5 */
|
||||
SWITCHTEC_QUIRK(0x5036); /* PFX 36XG5 */
|
||||
SWITCHTEC_QUIRK(0x5028); /* PFX 28XG5 */
|
||||
SWITCHTEC_QUIRK(0x5100); /* PSX 100XG5 */
|
||||
SWITCHTEC_QUIRK(0x5184); /* PSX 84XG5 */
|
||||
SWITCHTEC_QUIRK(0x5168); /* PSX 68XG5 */
|
||||
SWITCHTEC_QUIRK(0x5152); /* PSX 52XG5 */
|
||||
SWITCHTEC_QUIRK(0x5136); /* PSX 36XG5 */
|
||||
SWITCHTEC_QUIRK(0x5128); /* PSX 28XG5 */
|
||||
SWITCHTEC_QUIRK(0x5200); /* PAX 100XG5 */
|
||||
SWITCHTEC_QUIRK(0x5284); /* PAX 84XG5 */
|
||||
SWITCHTEC_QUIRK(0x5268); /* PAX 68XG5 */
|
||||
SWITCHTEC_QUIRK(0x5252); /* PAX 52XG5 */
|
||||
SWITCHTEC_QUIRK(0x5236); /* PAX 36XG5 */
|
||||
SWITCHTEC_QUIRK(0x5228); /* PAX 28XG5 */
|
||||
SWITCHTEC_QUIRK(0x5300); /* PFXA 100XG5 */
|
||||
SWITCHTEC_QUIRK(0x5384); /* PFXA 84XG5 */
|
||||
SWITCHTEC_QUIRK(0x5368); /* PFXA 68XG5 */
|
||||
SWITCHTEC_QUIRK(0x5352); /* PFXA 52XG5 */
|
||||
SWITCHTEC_QUIRK(0x5336); /* PFXA 36XG5 */
|
||||
SWITCHTEC_QUIRK(0x5328); /* PFXA 28XG5 */
|
||||
SWITCHTEC_QUIRK(0x5400); /* PSXA 100XG5 */
|
||||
SWITCHTEC_QUIRK(0x5484); /* PSXA 84XG5 */
|
||||
SWITCHTEC_QUIRK(0x5468); /* PSXA 68XG5 */
|
||||
SWITCHTEC_QUIRK(0x5452); /* PSXA 52XG5 */
|
||||
SWITCHTEC_QUIRK(0x5436); /* PSXA 36XG5 */
|
||||
SWITCHTEC_QUIRK(0x5428); /* PSXA 28XG5 */
|
||||
SWITCHTEC_QUIRK(0x5500); /* PAXA 100XG5 */
|
||||
SWITCHTEC_QUIRK(0x5584); /* PAXA 84XG5 */
|
||||
SWITCHTEC_QUIRK(0x5568); /* PAXA 68XG5 */
|
||||
SWITCHTEC_QUIRK(0x5552); /* PAXA 52XG5 */
|
||||
SWITCHTEC_QUIRK(0x5536); /* PAXA 36XG5 */
|
||||
SWITCHTEC_QUIRK(0x5528); /* PAXA 28XG5 */
|
||||
|
||||
/*
|
||||
* The PLX NTB uses devfn proxy IDs to move TLPs between NT endpoints.
|
||||
|
@ -1799,7 +1799,7 @@ static void remove_dev_resources(struct pci_dev *dev, struct resource *io,
|
||||
* Make sure prefetchable memory is reduced from
|
||||
* the correct resource. Specifically we put 32-bit
|
||||
* prefetchable memory in non-prefetchable window
|
||||
* if there is an 64-bit pretchable window.
|
||||
* if there is an 64-bit prefetchable window.
|
||||
*
|
||||
* See comments in __pci_bus_size_bridges() for
|
||||
* more information.
|
||||
|
@ -104,7 +104,7 @@ static void pci_std_update_resource(struct pci_dev *dev, int resno)
|
||||
pci_read_config_dword(dev, reg, &check);
|
||||
|
||||
if ((new ^ check) & mask) {
|
||||
pci_err(dev, "BAR %d: error updating (%#08x != %#08x)\n",
|
||||
pci_err(dev, "BAR %d: error updating (%#010x != %#010x)\n",
|
||||
resno, new, check);
|
||||
}
|
||||
|
||||
@ -113,7 +113,7 @@ static void pci_std_update_resource(struct pci_dev *dev, int resno)
|
||||
pci_write_config_dword(dev, reg + 4, new);
|
||||
pci_read_config_dword(dev, reg + 4, &check);
|
||||
if (check != new) {
|
||||
pci_err(dev, "BAR %d: error updating (high %#08x != %#08x)\n",
|
||||
pci_err(dev, "BAR %d: error updating (high %#010x != %#010x)\n",
|
||||
resno, new, check);
|
||||
}
|
||||
}
|
||||
|
@ -372,7 +372,7 @@ static ssize_t field ## _show(struct device *dev, \
|
||||
if (stdev->gen == SWITCHTEC_GEN3) \
|
||||
return io_string_show(buf, &si->gen3.field, \
|
||||
sizeof(si->gen3.field)); \
|
||||
else if (stdev->gen == SWITCHTEC_GEN4) \
|
||||
else if (stdev->gen >= SWITCHTEC_GEN4) \
|
||||
return io_string_show(buf, &si->gen4.field, \
|
||||
sizeof(si->gen4.field)); \
|
||||
else \
|
||||
@ -663,7 +663,7 @@ static int ioctl_flash_info(struct switchtec_dev *stdev,
|
||||
if (stdev->gen == SWITCHTEC_GEN3) {
|
||||
info.flash_length = ioread32(&fi->gen3.flash_length);
|
||||
info.num_partitions = SWITCHTEC_NUM_PARTITIONS_GEN3;
|
||||
} else if (stdev->gen == SWITCHTEC_GEN4) {
|
||||
} else if (stdev->gen >= SWITCHTEC_GEN4) {
|
||||
info.flash_length = ioread32(&fi->gen4.flash_length);
|
||||
info.num_partitions = SWITCHTEC_NUM_PARTITIONS_GEN4;
|
||||
} else {
|
||||
@ -870,7 +870,7 @@ static int ioctl_flash_part_info(struct switchtec_dev *stdev,
|
||||
ret = flash_part_info_gen3(stdev, &info);
|
||||
if (ret)
|
||||
return ret;
|
||||
} else if (stdev->gen == SWITCHTEC_GEN4) {
|
||||
} else if (stdev->gen >= SWITCHTEC_GEN4) {
|
||||
ret = flash_part_info_gen4(stdev, &info);
|
||||
if (ret)
|
||||
return ret;
|
||||
@ -1610,7 +1610,7 @@ static int switchtec_init_pci(struct switchtec_dev *stdev,
|
||||
|
||||
if (stdev->gen == SWITCHTEC_GEN3)
|
||||
part_id = &stdev->mmio_sys_info->gen3.partition_id;
|
||||
else if (stdev->gen == SWITCHTEC_GEN4)
|
||||
else if (stdev->gen >= SWITCHTEC_GEN4)
|
||||
part_id = &stdev->mmio_sys_info->gen4.partition_id;
|
||||
else
|
||||
return -EOPNOTSUPP;
|
||||
@ -1727,63 +1727,99 @@ static void switchtec_pci_remove(struct pci_dev *pdev)
|
||||
}
|
||||
|
||||
static const struct pci_device_id switchtec_pci_tbl[] = {
|
||||
SWITCHTEC_PCI_DEVICE(0x8531, SWITCHTEC_GEN3), //PFX 24xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8532, SWITCHTEC_GEN3), //PFX 32xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8533, SWITCHTEC_GEN3), //PFX 48xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8534, SWITCHTEC_GEN3), //PFX 64xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8535, SWITCHTEC_GEN3), //PFX 80xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8536, SWITCHTEC_GEN3), //PFX 96xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8541, SWITCHTEC_GEN3), //PSX 24xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8542, SWITCHTEC_GEN3), //PSX 32xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8543, SWITCHTEC_GEN3), //PSX 48xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8544, SWITCHTEC_GEN3), //PSX 64xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8545, SWITCHTEC_GEN3), //PSX 80xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8546, SWITCHTEC_GEN3), //PSX 96xG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8551, SWITCHTEC_GEN3), //PAX 24XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8552, SWITCHTEC_GEN3), //PAX 32XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8553, SWITCHTEC_GEN3), //PAX 48XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8554, SWITCHTEC_GEN3), //PAX 64XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8555, SWITCHTEC_GEN3), //PAX 80XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8556, SWITCHTEC_GEN3), //PAX 96XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8561, SWITCHTEC_GEN3), //PFXL 24XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8562, SWITCHTEC_GEN3), //PFXL 32XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8563, SWITCHTEC_GEN3), //PFXL 48XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8564, SWITCHTEC_GEN3), //PFXL 64XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8565, SWITCHTEC_GEN3), //PFXL 80XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8566, SWITCHTEC_GEN3), //PFXL 96XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8571, SWITCHTEC_GEN3), //PFXI 24XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8572, SWITCHTEC_GEN3), //PFXI 32XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8573, SWITCHTEC_GEN3), //PFXI 48XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8574, SWITCHTEC_GEN3), //PFXI 64XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8575, SWITCHTEC_GEN3), //PFXI 80XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x8576, SWITCHTEC_GEN3), //PFXI 96XG3
|
||||
SWITCHTEC_PCI_DEVICE(0x4000, SWITCHTEC_GEN4), //PFX 100XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4084, SWITCHTEC_GEN4), //PFX 84XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4068, SWITCHTEC_GEN4), //PFX 68XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4052, SWITCHTEC_GEN4), //PFX 52XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4036, SWITCHTEC_GEN4), //PFX 36XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4028, SWITCHTEC_GEN4), //PFX 28XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4100, SWITCHTEC_GEN4), //PSX 100XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4184, SWITCHTEC_GEN4), //PSX 84XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4168, SWITCHTEC_GEN4), //PSX 68XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4152, SWITCHTEC_GEN4), //PSX 52XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4136, SWITCHTEC_GEN4), //PSX 36XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4128, SWITCHTEC_GEN4), //PSX 28XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4200, SWITCHTEC_GEN4), //PAX 100XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4284, SWITCHTEC_GEN4), //PAX 84XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4268, SWITCHTEC_GEN4), //PAX 68XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4252, SWITCHTEC_GEN4), //PAX 52XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4236, SWITCHTEC_GEN4), //PAX 36XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4228, SWITCHTEC_GEN4), //PAX 28XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4352, SWITCHTEC_GEN4), //PFXA 52XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4336, SWITCHTEC_GEN4), //PFXA 36XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4328, SWITCHTEC_GEN4), //PFXA 28XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4452, SWITCHTEC_GEN4), //PSXA 52XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4436, SWITCHTEC_GEN4), //PSXA 36XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4428, SWITCHTEC_GEN4), //PSXA 28XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4552, SWITCHTEC_GEN4), //PAXA 52XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4536, SWITCHTEC_GEN4), //PAXA 36XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x4528, SWITCHTEC_GEN4), //PAXA 28XG4
|
||||
SWITCHTEC_PCI_DEVICE(0x8531, SWITCHTEC_GEN3), /* PFX 24xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8532, SWITCHTEC_GEN3), /* PFX 32xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8533, SWITCHTEC_GEN3), /* PFX 48xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8534, SWITCHTEC_GEN3), /* PFX 64xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8535, SWITCHTEC_GEN3), /* PFX 80xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8536, SWITCHTEC_GEN3), /* PFX 96xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8541, SWITCHTEC_GEN3), /* PSX 24xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8542, SWITCHTEC_GEN3), /* PSX 32xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8543, SWITCHTEC_GEN3), /* PSX 48xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8544, SWITCHTEC_GEN3), /* PSX 64xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8545, SWITCHTEC_GEN3), /* PSX 80xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8546, SWITCHTEC_GEN3), /* PSX 96xG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8551, SWITCHTEC_GEN3), /* PAX 24XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8552, SWITCHTEC_GEN3), /* PAX 32XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8553, SWITCHTEC_GEN3), /* PAX 48XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8554, SWITCHTEC_GEN3), /* PAX 64XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8555, SWITCHTEC_GEN3), /* PAX 80XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8556, SWITCHTEC_GEN3), /* PAX 96XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8561, SWITCHTEC_GEN3), /* PFXL 24XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8562, SWITCHTEC_GEN3), /* PFXL 32XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8563, SWITCHTEC_GEN3), /* PFXL 48XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8564, SWITCHTEC_GEN3), /* PFXL 64XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8565, SWITCHTEC_GEN3), /* PFXL 80XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8566, SWITCHTEC_GEN3), /* PFXL 96XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8571, SWITCHTEC_GEN3), /* PFXI 24XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8572, SWITCHTEC_GEN3), /* PFXI 32XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8573, SWITCHTEC_GEN3), /* PFXI 48XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8574, SWITCHTEC_GEN3), /* PFXI 64XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8575, SWITCHTEC_GEN3), /* PFXI 80XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x8576, SWITCHTEC_GEN3), /* PFXI 96XG3 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4000, SWITCHTEC_GEN4), /* PFX 100XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4084, SWITCHTEC_GEN4), /* PFX 84XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4068, SWITCHTEC_GEN4), /* PFX 68XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4052, SWITCHTEC_GEN4), /* PFX 52XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4036, SWITCHTEC_GEN4), /* PFX 36XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4028, SWITCHTEC_GEN4), /* PFX 28XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4100, SWITCHTEC_GEN4), /* PSX 100XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4184, SWITCHTEC_GEN4), /* PSX 84XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4168, SWITCHTEC_GEN4), /* PSX 68XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4152, SWITCHTEC_GEN4), /* PSX 52XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4136, SWITCHTEC_GEN4), /* PSX 36XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4128, SWITCHTEC_GEN4), /* PSX 28XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4200, SWITCHTEC_GEN4), /* PAX 100XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4284, SWITCHTEC_GEN4), /* PAX 84XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4268, SWITCHTEC_GEN4), /* PAX 68XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4252, SWITCHTEC_GEN4), /* PAX 52XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4236, SWITCHTEC_GEN4), /* PAX 36XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4228, SWITCHTEC_GEN4), /* PAX 28XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4352, SWITCHTEC_GEN4), /* PFXA 52XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4336, SWITCHTEC_GEN4), /* PFXA 36XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4328, SWITCHTEC_GEN4), /* PFXA 28XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4452, SWITCHTEC_GEN4), /* PSXA 52XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4436, SWITCHTEC_GEN4), /* PSXA 36XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4428, SWITCHTEC_GEN4), /* PSXA 28XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4552, SWITCHTEC_GEN4), /* PAXA 52XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4536, SWITCHTEC_GEN4), /* PAXA 36XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x4528, SWITCHTEC_GEN4), /* PAXA 28XG4 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5000, SWITCHTEC_GEN5), /* PFX 100XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5084, SWITCHTEC_GEN5), /* PFX 84XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5068, SWITCHTEC_GEN5), /* PFX 68XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5052, SWITCHTEC_GEN5), /* PFX 52XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5036, SWITCHTEC_GEN5), /* PFX 36XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5028, SWITCHTEC_GEN5), /* PFX 28XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5100, SWITCHTEC_GEN5), /* PSX 100XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5184, SWITCHTEC_GEN5), /* PSX 84XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5168, SWITCHTEC_GEN5), /* PSX 68XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5152, SWITCHTEC_GEN5), /* PSX 52XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5136, SWITCHTEC_GEN5), /* PSX 36XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5128, SWITCHTEC_GEN5), /* PSX 28XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5200, SWITCHTEC_GEN5), /* PAX 100XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5284, SWITCHTEC_GEN5), /* PAX 84XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5268, SWITCHTEC_GEN5), /* PAX 68XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5252, SWITCHTEC_GEN5), /* PAX 52XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5236, SWITCHTEC_GEN5), /* PAX 36XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5228, SWITCHTEC_GEN5), /* PAX 28XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5300, SWITCHTEC_GEN5), /* PFXA 100XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5384, SWITCHTEC_GEN5), /* PFXA 84XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5368, SWITCHTEC_GEN5), /* PFXA 68XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5352, SWITCHTEC_GEN5), /* PFXA 52XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5336, SWITCHTEC_GEN5), /* PFXA 36XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5328, SWITCHTEC_GEN5), /* PFXA 28XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5400, SWITCHTEC_GEN5), /* PSXA 100XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5484, SWITCHTEC_GEN5), /* PSXA 84XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5468, SWITCHTEC_GEN5), /* PSXA 68XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5452, SWITCHTEC_GEN5), /* PSXA 52XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5436, SWITCHTEC_GEN5), /* PSXA 36XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5428, SWITCHTEC_GEN5), /* PSXA 28XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5500, SWITCHTEC_GEN5), /* PAXA 100XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5584, SWITCHTEC_GEN5), /* PAXA 84XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5568, SWITCHTEC_GEN5), /* PAXA 68XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5552, SWITCHTEC_GEN5), /* PAXA 52XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5536, SWITCHTEC_GEN5), /* PAXA 36XG5 */
|
||||
SWITCHTEC_PCI_DEVICE(0x5528, SWITCHTEC_GEN5), /* PAXA 28XG5 */
|
||||
{0}
|
||||
};
|
||||
MODULE_DEVICE_TABLE(pci, switchtec_pci_tbl);
|
||||
|
@ -52,13 +52,13 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
|
||||
|
||||
switch (len) {
|
||||
case 1:
|
||||
err = put_user(byte, (unsigned char __user *)buf);
|
||||
err = put_user(byte, (u8 __user *)buf);
|
||||
break;
|
||||
case 2:
|
||||
err = put_user(word, (unsigned short __user *)buf);
|
||||
err = put_user(word, (u16 __user *)buf);
|
||||
break;
|
||||
case 4:
|
||||
err = put_user(dword, (unsigned int __user *)buf);
|
||||
err = put_user(dword, (u32 __user *)buf);
|
||||
break;
|
||||
}
|
||||
pci_dev_put(dev);
|
||||
@ -70,13 +70,13 @@ error:
|
||||
they get instead of a machine check on x86. */
|
||||
switch (len) {
|
||||
case 1:
|
||||
put_user(-1, (unsigned char __user *)buf);
|
||||
put_user(-1, (u8 __user *)buf);
|
||||
break;
|
||||
case 2:
|
||||
put_user(-1, (unsigned short __user *)buf);
|
||||
put_user(-1, (u16 __user *)buf);
|
||||
break;
|
||||
case 4:
|
||||
put_user(-1, (unsigned int __user *)buf);
|
||||
put_user(-1, (u32 __user *)buf);
|
||||
break;
|
||||
}
|
||||
pci_dev_put(dev);
|
||||
|
@ -1,6 +1,6 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
/*
|
||||
* vgaarb.c: Implements the VGA arbitration. For details refer to
|
||||
* vgaarb.c: Implements VGA arbitration. For details refer to
|
||||
* Documentation/gpu/vgaarbiter.rst
|
||||
*
|
||||
* (C) Copyright 2005 Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
||||
@ -30,22 +30,21 @@
|
||||
#include <linux/vt.h>
|
||||
#include <linux/console.h>
|
||||
#include <linux/acpi.h>
|
||||
|
||||
#include <linux/uaccess.h>
|
||||
|
||||
#include <linux/vgaarb.h>
|
||||
|
||||
static void vga_arbiter_notify_clients(void);
|
||||
|
||||
/*
|
||||
* We keep a list of all vga devices in the system to speed
|
||||
* We keep a list of all VGA devices in the system to speed
|
||||
* up the various operations of the arbiter
|
||||
*/
|
||||
struct vga_device {
|
||||
struct list_head list;
|
||||
struct pci_dev *pdev;
|
||||
unsigned int decodes; /* what does it decodes */
|
||||
unsigned int owns; /* what does it owns */
|
||||
unsigned int locks; /* what does it locks */
|
||||
unsigned int decodes; /* what it decodes */
|
||||
unsigned int owns; /* what it owns */
|
||||
unsigned int locks; /* what it locks */
|
||||
unsigned int io_lock_cnt; /* legacy IO lock count */
|
||||
unsigned int mem_lock_cnt; /* legacy MEM lock count */
|
||||
unsigned int io_norm_cnt; /* normal IO count */
|
||||
@ -61,7 +60,6 @@ static bool vga_arbiter_used;
|
||||
static DEFINE_SPINLOCK(vga_lock);
|
||||
static DECLARE_WAIT_QUEUE_HEAD(vga_wait_queue);
|
||||
|
||||
|
||||
static const char *vga_iostate_to_str(unsigned int iostate)
|
||||
{
|
||||
/* Ignore VGA_RSRC_IO and VGA_RSRC_MEM */
|
||||
@ -77,16 +75,18 @@ static const char *vga_iostate_to_str(unsigned int iostate)
|
||||
return "none";
|
||||
}
|
||||
|
||||
static int vga_str_to_iostate(char *buf, int str_size, int *io_state)
|
||||
static int vga_str_to_iostate(char *buf, int str_size, unsigned int *io_state)
|
||||
{
|
||||
/* we could in theory hand out locks on IO and mem
|
||||
* separately to userspace but it can cause deadlocks */
|
||||
/*
|
||||
* In theory, we could hand out locks on IO and MEM separately to
|
||||
* userspace, but this can cause deadlocks.
|
||||
*/
|
||||
if (strncmp(buf, "none", 4) == 0) {
|
||||
*io_state = VGA_RSRC_NONE;
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* XXX We're not chekcing the str_size! */
|
||||
/* XXX We're not checking the str_size! */
|
||||
if (strncmp(buf, "io+mem", 6) == 0)
|
||||
goto both;
|
||||
else if (strncmp(buf, "io", 2) == 0)
|
||||
@ -99,7 +99,7 @@ both:
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* this is only used a cookie - it should not be dereferenced */
|
||||
/* This is only used as a cookie, it should not be dereferenced */
|
||||
static struct pci_dev *vga_default;
|
||||
|
||||
/* Find somebody in our list */
|
||||
@ -116,20 +116,18 @@ static struct vga_device *vgadev_find(struct pci_dev *pdev)
|
||||
/**
|
||||
* vga_default_device - return the default VGA device, for vgacon
|
||||
*
|
||||
* This can be defined by the platform. The default implementation
|
||||
* is rather dumb and will probably only work properly on single
|
||||
* vga card setups and/or x86 platforms.
|
||||
* This can be defined by the platform. The default implementation is
|
||||
* rather dumb and will probably only work properly on single VGA card
|
||||
* setups and/or x86 platforms.
|
||||
*
|
||||
* If your VGA default device is not PCI, you'll have to return
|
||||
* NULL here. In this case, I assume it will not conflict with
|
||||
* any PCI card. If this is not true, I'll have to define two archs
|
||||
* hooks for enabling/disabling the VGA default device if that is
|
||||
* possible. This may be a problem with real _ISA_ VGA cards, in
|
||||
* addition to a PCI one. I don't know at this point how to deal
|
||||
* with that card. Can theirs IOs be disabled at all ? If not, then
|
||||
* I suppose it's a matter of having the proper arch hook telling
|
||||
* us about it, so we basically never allow anybody to succeed a
|
||||
* vga_get()...
|
||||
* If your VGA default device is not PCI, you'll have to return NULL here.
|
||||
* In this case, I assume it will not conflict with any PCI card. If this
|
||||
* is not true, I'll have to define two arch hooks for enabling/disabling
|
||||
* the VGA default device if that is possible. This may be a problem with
|
||||
* real _ISA_ VGA cards, in addition to a PCI one. I don't know at this
|
||||
* point how to deal with that card. Can their IOs be disabled at all? If
|
||||
* not, then I suppose it's a matter of having the proper arch hook telling
|
||||
* us about it, so we basically never allow anybody to succeed a vga_get().
|
||||
*/
|
||||
struct pci_dev *vga_default_device(void)
|
||||
{
|
||||
@ -147,14 +145,13 @@ void vga_set_default_device(struct pci_dev *pdev)
|
||||
}
|
||||
|
||||
/**
|
||||
* vga_remove_vgacon - deactivete vga console
|
||||
* vga_remove_vgacon - deactivate VGA console
|
||||
*
|
||||
* Unbind and unregister vgacon in case pdev is the default vga
|
||||
* device. Can be called by gpu drivers on initialization to make
|
||||
* sure vga register access done by vgacon will not disturb the
|
||||
* device.
|
||||
* Unbind and unregister vgacon in case pdev is the default VGA device.
|
||||
* Can be called by GPU drivers on initialization to make sure VGA register
|
||||
* access done by vgacon will not disturb the device.
|
||||
*
|
||||
* @pdev: pci device.
|
||||
* @pdev: PCI device.
|
||||
*/
|
||||
#if !defined(CONFIG_VGA_CONSOLE)
|
||||
int vga_remove_vgacon(struct pci_dev *pdev)
|
||||
@ -193,14 +190,17 @@ int vga_remove_vgacon(struct pci_dev *pdev)
|
||||
#endif
|
||||
EXPORT_SYMBOL(vga_remove_vgacon);
|
||||
|
||||
/* If we don't ever use VGA arb we should avoid
|
||||
turning off anything anywhere due to old X servers getting
|
||||
confused about the boot device not being VGA */
|
||||
/*
|
||||
* If we don't ever use VGA arbitration, we should avoid turning off
|
||||
* anything anywhere due to old X servers getting confused about the boot
|
||||
* device not being VGA.
|
||||
*/
|
||||
static void vga_check_first_use(void)
|
||||
{
|
||||
/* we should inform all GPUs in the system that
|
||||
* VGA arb has occurred and to try and disable resources
|
||||
* if they can */
|
||||
/*
|
||||
* Inform all GPUs in the system that VGA arbitration has occurred
|
||||
* so they can disable resources if possible.
|
||||
*/
|
||||
if (!vga_arbiter_used) {
|
||||
vga_arbiter_used = true;
|
||||
vga_arbiter_notify_clients();
|
||||
@ -216,7 +216,8 @@ static struct vga_device *__vga_tryget(struct vga_device *vgadev,
|
||||
unsigned int pci_bits;
|
||||
u32 flags = 0;
|
||||
|
||||
/* Account for "normal" resources to lock. If we decode the legacy,
|
||||
/*
|
||||
* Account for "normal" resources to lock. If we decode the legacy,
|
||||
* counterpart, we need to request it as well
|
||||
*/
|
||||
if ((rsrc & VGA_RSRC_NORMAL_IO) &&
|
||||
@ -236,14 +237,15 @@ static struct vga_device *__vga_tryget(struct vga_device *vgadev,
|
||||
if (wants == 0)
|
||||
goto lock_them;
|
||||
|
||||
/* We don't need to request a legacy resource, we just enable
|
||||
* appropriate decoding and go
|
||||
/*
|
||||
* We don't need to request a legacy resource, we just enable
|
||||
* appropriate decoding and go.
|
||||
*/
|
||||
legacy_wants = wants & VGA_RSRC_LEGACY_MASK;
|
||||
if (legacy_wants == 0)
|
||||
goto enable_them;
|
||||
|
||||
/* Ok, we don't, let's find out how we need to kick off */
|
||||
/* Ok, we don't, let's find out who we need to kick off */
|
||||
list_for_each_entry(conflict, &vga_list, list) {
|
||||
unsigned int lwants = legacy_wants;
|
||||
unsigned int change_bridge = 0;
|
||||
@ -252,39 +254,44 @@ static struct vga_device *__vga_tryget(struct vga_device *vgadev,
|
||||
if (vgadev == conflict)
|
||||
continue;
|
||||
|
||||
/* We have a possible conflict. before we go further, we must
|
||||
/*
|
||||
* We have a possible conflict. Before we go further, we must
|
||||
* check if we sit on the same bus as the conflicting device.
|
||||
* if we don't, then we must tie both IO and MEM resources
|
||||
* If we don't, then we must tie both IO and MEM resources
|
||||
* together since there is only a single bit controlling
|
||||
* VGA forwarding on P2P bridges
|
||||
* VGA forwarding on P2P bridges.
|
||||
*/
|
||||
if (vgadev->pdev->bus != conflict->pdev->bus) {
|
||||
change_bridge = 1;
|
||||
lwants = VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM;
|
||||
}
|
||||
|
||||
/* Check if the guy has a lock on the resource. If he does,
|
||||
* return the conflicting entry
|
||||
/*
|
||||
* Check if the guy has a lock on the resource. If he does,
|
||||
* return the conflicting entry.
|
||||
*/
|
||||
if (conflict->locks & lwants)
|
||||
return conflict;
|
||||
|
||||
/* Ok, now check if it owns the resource we want. We can
|
||||
* lock resources that are not decoded, therefore a device
|
||||
/*
|
||||
* Ok, now check if it owns the resource we want. We can
|
||||
* lock resources that are not decoded; therefore a device
|
||||
* can own resources it doesn't decode.
|
||||
*/
|
||||
match = lwants & conflict->owns;
|
||||
if (!match)
|
||||
continue;
|
||||
|
||||
/* looks like he doesn't have a lock, we can steal
|
||||
* them from him
|
||||
/*
|
||||
* Looks like he doesn't have a lock, we can steal them
|
||||
* from him.
|
||||
*/
|
||||
|
||||
flags = 0;
|
||||
pci_bits = 0;
|
||||
|
||||
/* If we can't control legacy resources via the bridge, we
|
||||
/*
|
||||
* If we can't control legacy resources via the bridge, we
|
||||
* also need to disable normal decoding.
|
||||
*/
|
||||
if (!conflict->bridge_has_one_vga) {
|
||||
@ -311,7 +318,8 @@ static struct vga_device *__vga_tryget(struct vga_device *vgadev,
|
||||
}
|
||||
|
||||
enable_them:
|
||||
/* ok dude, we got it, everybody conflicting has been disabled, let's
|
||||
/*
|
||||
* Ok, we got it, everybody conflicting has been disabled, let's
|
||||
* enable us. Mark any bits in "owns" regardless of whether we
|
||||
* decoded them. We can lock resources we don't decode, therefore
|
||||
* we must track them via "owns".
|
||||
@ -353,8 +361,9 @@ static void __vga_put(struct vga_device *vgadev, unsigned int rsrc)
|
||||
|
||||
vgaarb_dbg(dev, "%s\n", __func__);
|
||||
|
||||
/* Update our counters, and account for equivalent legacy resources
|
||||
* if we decode them
|
||||
/*
|
||||
* Update our counters and account for equivalent legacy resources
|
||||
* if we decode them.
|
||||
*/
|
||||
if ((rsrc & VGA_RSRC_NORMAL_IO) && vgadev->io_norm_cnt > 0) {
|
||||
vgadev->io_norm_cnt--;
|
||||
@@ -371,32 +380,34 @@ static void __vga_put(struct vga_device *vgadev, unsigned int rsrc)
	if ((rsrc & VGA_RSRC_LEGACY_MEM) && vgadev->mem_lock_cnt > 0)
		vgadev->mem_lock_cnt--;

	/* Just clear lock bits, we do lazy operations so we don't really
	 * have to bother about anything else at this point
	/*
	 * Just clear lock bits, we do lazy operations so we don't really
	 * have to bother about anything else at this point.
	 */
	if (vgadev->io_lock_cnt == 0)
		vgadev->locks &= ~VGA_RSRC_LEGACY_IO;
	if (vgadev->mem_lock_cnt == 0)
		vgadev->locks &= ~VGA_RSRC_LEGACY_MEM;

	/* Kick the wait queue in case somebody was waiting if we actually
	 * released something
	/*
	 * Kick the wait queue in case somebody was waiting if we actually
	 * released something.
	 */
	if (old_locks != vgadev->locks)
		wake_up_all(&vga_wait_queue);
}

/**
 * vga_get - acquire & locks VGA resources
 * @pdev: pci device of the VGA card or NULL for the system default
 * vga_get - acquire & lock VGA resources
 * @pdev: PCI device of the VGA card or NULL for the system default
 * @rsrc: bit mask of resources to acquire and lock
 * @interruptible: blocking should be interruptible by signals ?
 *
 * This function acquires VGA resources for the given card and mark those
 * resources locked. If the resource requested are "normal" (and not legacy)
 * Acquire VGA resources for the given card and mark those resources
 * locked. If the resources requested are "normal" (and not legacy)
 * resources, the arbiter will first check whether the card is doing legacy
 * decoding for that type of resource. If yes, the lock is "converted" into a
 * legacy resource lock.
 * decoding for that type of resource. If yes, the lock is "converted" into
 * a legacy resource lock.
 *
 * The arbiter will first look for all VGA cards that might conflict and disable
 * their IOs and/or Memory access, including VGA forwarding on P2P bridges if
@@ -428,7 +439,7 @@ int vga_get(struct pci_dev *pdev, unsigned int rsrc, int interruptible)
	int rc = 0;

	vga_check_first_use();
	/* The one who calls us should check for this, but lets be sure... */
	/* The caller should check for this, but let's be sure */
	if (pdev == NULL)
		pdev = vga_default_device();
	if (pdev == NULL)
@@ -447,12 +458,12 @@ int vga_get(struct pci_dev *pdev, unsigned int rsrc, int interruptible)
		if (conflict == NULL)
			break;

		/* We have a conflict, we wait until somebody kicks the
		/*
		 * We have a conflict; we wait until somebody kicks the
		 * work queue. Currently we have one work queue that we
		 * kick each time some resources are released, but it would
		 * be fairly easy to have a per device one so that we only
		 * need to attach to the conflicting device
		 * be fairly easy to have a per-device one so that we only
		 * need to attach to the conflicting device.
		 */
		init_waitqueue_entry(&wait, current);
		add_wait_queue(&vga_wait_queue, &wait);
@@ -474,12 +485,12 @@ EXPORT_SYMBOL(vga_get);

/**
 * vga_tryget - try to acquire & lock legacy VGA resources
 * @pdev: pci devivce of VGA card or NULL for system default
 * @pdev: PCI device of VGA card or NULL for system default
 * @rsrc: bit mask of resources to acquire and lock
 *
 * This function performs the same operation as vga_get(), but will return an
 * error (-EBUSY) instead of blocking if the resources are already locked by
 * another card. It can be called in any context
 * Perform the same operation as vga_get(), but return an error (-EBUSY)
 * instead of blocking if the resources are already locked by another card.
 * Can be called in any context.
 *
 * On success, release the VGA resource again with vga_put().
 *
@@ -495,7 +506,7 @@ static int vga_tryget(struct pci_dev *pdev, unsigned int rsrc)

	vga_check_first_use();

	/* The one who calls us should check for this, but lets be sure... */
	/* The caller should check for this, but let's be sure */
	if (pdev == NULL)
		pdev = vga_default_device();
	if (pdev == NULL)
@@ -515,20 +526,20 @@ bail:

/**
 * vga_put - release lock on legacy VGA resources
 * @pdev: pci device of VGA card or NULL for system default
 * @rsrc: but mask of resource to release
 * @pdev: PCI device of VGA card or NULL for system default
 * @rsrc: bit mask of resource to release
 *
 * This fuction releases resources previously locked by vga_get() or
 * vga_tryget(). The resources aren't disabled right away, so that a subsequence
 * vga_get() on the same card will succeed immediately. Resources have a
 * counter, so locks are only released if the counter reaches 0.
 * Release resources previously locked by vga_get() or vga_tryget(). The
 * resources aren't disabled right away, so that a subsequent vga_get() on
 * the same card will succeed immediately. Resources have a counter, so
 * locks are only released if the counter reaches 0.
 */
void vga_put(struct pci_dev *pdev, unsigned int rsrc)
{
	struct vga_device *vgadev;
	unsigned long flags;

	/* The one who calls us should check for this, but lets be sure... */
	/* The caller should check for this, but let's be sure */
	if (pdev == NULL)
		pdev = vga_default_device();
	if (pdev == NULL)
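
[Editor's illustration, not part of the patch] A minimal sketch of how a driver might use the vga_get()/vga_put() pair documented above; the helper name and the work done while holding the lock are hypothetical.

#include <linux/pci.h>
#include <linux/vgaarb.h>

/* Hypothetical helper: touch the legacy VGA ranges under the arbiter lock */
static int mydrv_touch_legacy_vga(struct pci_dev *pdev)
{
	int rc;

	/* Block (interruptibly) until legacy I/O and MEM belong to us */
	rc = vga_get_interruptible(pdev,
				   VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM);
	if (rc)
		return rc;

	/* ... safely access 0x3B0-0x3DF and 0xA0000-0xBFFFF here ... */

	/* Drop the lock; a later vga_get() on this card succeeds immediately */
	vga_put(pdev, VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM);
	return 0;
}
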
@@ -665,7 +676,7 @@ static bool vga_is_boot_device(struct vga_device *vgadev)
	}

	/*
	 * vgadev has neither IO nor MEM enabled. If we haven't found any
	 * Vgadev has neither IO nor MEM enabled. If we haven't found any
	 * other VGA devices, it is the best candidate so far.
	 */
	if (!boot_vga)
@@ -696,20 +707,20 @@ static void vga_arbiter_check_bridge_sharing(struct vga_device *vgadev)
		return;
	}

	/* okay iterate the new devices bridge hierarachy */
	/* Iterate the new device's bridge hierarchy */
	new_bus = vgadev->pdev->bus;
	while (new_bus) {
		new_bridge = new_bus->self;

		/* go through list of devices already registered */
		/* Go through list of devices already registered */
		list_for_each_entry(same_bridge_vgadev, &vga_list, list) {
			bus = same_bridge_vgadev->pdev->bus;
			bridge = bus->self;

			/* see if the share a bridge with this device */
			/* See if it shares a bridge with this device */
			if (new_bridge == bridge) {
				/*
				 * If their direct parent bridge is the same
				 * If its direct parent bridge is the same
				 * as any bridge of this device then it can't
				 * be used for that device.
				 */
@@ -717,10 +728,10 @@ static void vga_arbiter_check_bridge_sharing(struct vga_device *vgadev)
			}

			/*
			 * Now iterate the previous devices bridge hierarchy.
			 * If the new devices parent bridge is in the other
			 * devices hierarchy then we can't use it to control
			 * this device
			 * Now iterate the previous device's bridge hierarchy.
			 * If the new device's parent bridge is in the other
			 * device's hierarchy, we can't use it to control this
			 * device.
			 */
			while (bus) {
				bridge = bus->self;
@@ -741,10 +752,9 @@ static void vga_arbiter_check_bridge_sharing(struct vga_device *vgadev)
}

/*
 * Currently, we assume that the "initial" setup of the system is
 * not sane, that is we come up with conflicting devices and let
 * the arbiter's client decides if devices decodes or not legacy
 * things.
 * Currently, we assume that the "initial" setup of the system is not sane,
 * that is, we come up with conflicting devices and let the arbiter's
 * client decide if devices decodes legacy things or not.
 */
static bool vga_arbiter_add_pci_device(struct pci_dev *pdev)
{
@@ -763,7 +773,7 @@ static bool vga_arbiter_add_pci_device(struct pci_dev *pdev)
	if (vgadev == NULL) {
		vgaarb_err(&pdev->dev, "failed to allocate VGA arbiter data\n");
		/*
		 * What to do on allocation failure ? For now, let's just do
		 * What to do on allocation failure? For now, let's just do
		 * nothing, I'm not sure there is anything saner to be done.
		 */
		return false;
@@ -781,10 +791,12 @@ static bool vga_arbiter_add_pci_device(struct pci_dev *pdev)
	vgadev->decodes = VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM |
			  VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;

	/* by default mark it as decoding */
	/* By default, mark it as decoding */
	vga_decode_count++;
	/* Mark that we "own" resources based on our enables, we will
	 * clear that below if the bridge isn't forwarding

	/*
	 * Mark that we "own" resources based on our enables, we will
	 * clear that below if the bridge isn't forwarding.
	 */
	pci_read_config_word(pdev, PCI_COMMAND, &cmd);
	if (cmd & PCI_COMMAND_IO)
@@ -864,24 +876,23 @@ bail:
	return ret;
}

/* this is called with the lock */
static inline void vga_update_device_decodes(struct vga_device *vgadev,
					     int new_decodes)
/* Called with the lock */
static void vga_update_device_decodes(struct vga_device *vgadev,
				      unsigned int new_decodes)
{
	struct device *dev = &vgadev->pdev->dev;
	int old_decodes, decodes_removed, decodes_unlocked;
	unsigned int old_decodes = vgadev->decodes;
	unsigned int decodes_removed = ~new_decodes & old_decodes;
	unsigned int decodes_unlocked = vgadev->locks & decodes_removed;

	old_decodes = vgadev->decodes;
	decodes_removed = ~new_decodes & old_decodes;
	decodes_unlocked = vgadev->locks & decodes_removed;
	vgadev->decodes = new_decodes;

	vgaarb_info(dev, "changed VGA decodes: olddecodes=%s,decodes=%s:owns=%s\n",
		    vga_iostate_to_str(old_decodes),
		    vga_iostate_to_str(vgadev->decodes),
		    vga_iostate_to_str(vgadev->owns));
	vgaarb_info(dev, "VGA decodes changed: olddecodes=%s,decodes=%s:owns=%s\n",
		    vga_iostate_to_str(old_decodes),
		    vga_iostate_to_str(vgadev->decodes),
		    vga_iostate_to_str(vgadev->owns));

	/* if we removed locked decodes, lock count goes to zero, and release */
	/* If we removed locked decodes, lock count goes to zero, and release */
	if (decodes_unlocked) {
		if (decodes_unlocked & VGA_RSRC_LEGACY_IO)
			vgadev->io_lock_cnt = 0;
@@ -890,7 +901,7 @@ static inline void vga_update_device_decodes(struct vga_device *vgadev,
		__vga_put(vgadev, decodes_unlocked);
	}

	/* change decodes counter */
	/* Change decodes counter */
	if (old_decodes & VGA_RSRC_LEGACY_MASK &&
	    !(new_decodes & VGA_RSRC_LEGACY_MASK))
		vga_decode_count--;
@@ -914,16 +925,17 @@ static void __vga_set_legacy_decoding(struct pci_dev *pdev,
	if (vgadev == NULL)
		goto bail;

	/* don't let userspace futz with kernel driver decodes */
	/* Don't let userspace futz with kernel driver decodes */
	if (userspace && vgadev->set_decode)
		goto bail;

	/* update the device decodes + counter */
	/* Update the device decodes + counter */
	vga_update_device_decodes(vgadev, decodes);

	/* XXX if somebody is going from "doesn't decode" to "decodes" state
	 * here, additional care must be taken as we may have pending owner
	 * ship of non-legacy region ...
	/*
	 * XXX If somebody is going from "doesn't decode" to "decodes"
	 * state here, additional care must be taken as we may have pending
	 * ownership of non-legacy region.
	 */
bail:
	spin_unlock_irqrestore(&vga_lock, flags);
@@ -931,10 +943,10 @@ bail:

/**
 * vga_set_legacy_decoding
 * @pdev: pci device of the VGA card
 * @pdev: PCI device of the VGA card
 * @decodes: bit mask of what legacy regions the card decodes
 *
 * Indicates to the arbiter if the card decodes legacy VGA IOs, legacy VGA
 * Indicate to the arbiter if the card decodes legacy VGA IOs, legacy VGA
 * Memory, both, or none. All cards default to both, the card driver (fbdev for
 * example) should tell the arbiter if it has disabled legacy decoding, so the
 * card can be left out of the arbitration process (and can be safe to take
@@ -948,47 +960,42 @@ EXPORT_SYMBOL(vga_set_legacy_decoding);

/**
 * vga_client_register - register or unregister a VGA arbitration client
 * @pdev: pci device of the VGA client
 * @set_decode: vga decode change callback
 * @pdev: PCI device of the VGA client
 * @set_decode: VGA decode change callback
 *
 * Clients have two callback mechanisms they can use.
 *
 * @set_decode callback: If a client can disable its GPU VGA resource, it
 * will get a callback from this to set the encode/decode state.
 *
 * Rationale: we cannot disable VGA decode resources unconditionally some single
 * GPU laptops seem to require ACPI or BIOS access to the VGA registers to
 * control things like backlights etc. Hopefully newer multi-GPU laptops do
 * something saner, and desktops won't have any special ACPI for this. The
 * driver will get a callback when VGA arbitration is first used by userspace
 * since some older X servers have issues.
 * Rationale: we cannot disable VGA decode resources unconditionally
 * because some single GPU laptops seem to require ACPI or BIOS access to
 * the VGA registers to control things like backlights etc. Hopefully newer
 * multi-GPU laptops do something saner, and desktops won't have any
 * special ACPI for this. The driver will get a callback when VGA
 * arbitration is first used by userspace since some older X servers have
 * issues.
 *
 * This function does not check whether a client for @pdev has been registered
 * already.
 * Does not check whether a client for @pdev has been registered already.
 *
 * To unregister just call vga_client_unregister().
 * To unregister, call vga_client_unregister().
 *
 * Returns: 0 on success, -1 on failure
 * Returns: 0 on success, -ENODEV on failure
 */
int vga_client_register(struct pci_dev *pdev,
		unsigned int (*set_decode)(struct pci_dev *pdev, bool decode))
{
	int ret = -ENODEV;
	struct vga_device *vgadev;
	unsigned long flags;
	struct vga_device *vgadev;

	spin_lock_irqsave(&vga_lock, flags);
	vgadev = vgadev_find(pdev);
	if (!vgadev)
		goto bail;

	vgadev->set_decode = set_decode;
	ret = 0;

bail:
	if (vgadev)
		vgadev->set_decode = set_decode;
	spin_unlock_irqrestore(&vga_lock, flags);
	return ret;

	if (!vgadev)
		return -ENODEV;
	return 0;
}
EXPORT_SYMBOL(vga_client_register);

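
[Editor's illustration, not part of the patch] A sketch of a client registration using the signature shown above; the driver name and the callback body are hypothetical.

#include <linux/pci.h>
#include <linux/vgaarb.h>

/* Hypothetical callback: report which resources the GPU still decodes */
static unsigned int mydrv_vga_set_decode(struct pci_dev *pdev, bool decode)
{
	/* A real driver would flip its legacy VGA decode enable here */
	if (decode)
		return VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM |
		       VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;

	return VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;
}

static int mydrv_register_arb_client(struct pci_dev *pdev)
{
	/* 0 on success, -ENODEV if the arbiter does not know about pdev */
	return vga_client_register(pdev, mydrv_vga_set_decode);
}
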
@@ -997,13 +1004,13 @@ EXPORT_SYMBOL(vga_client_register);
 *
 * Semantics is:
 *
 * open    : open user instance of the arbitrer. by default, it's
 * open    : Open user instance of the arbiter. By default, it's
 *           attached to the default VGA device of the system.
 *
 * close   : close user instance, release locks
 * close   : Close user instance, release locks
 *
 * read    : return a string indicating the status of the target.
 *           an IO state string is of the form {io,mem,io+mem,none},
 * read    : Return a string indicating the status of the target.
 *           An IO state string is of the form {io,mem,io+mem,none},
 *           mc and ic are respectively mem and io lock counts (for
 *           debugging/diagnostic only). "decodes" indicate what the
 *           card currently decodes, "owns" indicates what is currently
@@ -1017,7 +1024,7 @@ EXPORT_SYMBOL(vga_client_register);
 * write   : write a command to the arbiter. List of commands is:
 *
 *   target <card_ID>   : switch target to card <card_ID> (see below)
 *   lock <io_state>    : acquires locks on target ("none" is invalid io_state)
 *   lock <io_state>    : acquire locks on target ("none" is invalid io_state)
 *   trylock <io_state> : non-blocking acquire locks on target
 *   unlock <io_state>  : release locks on target
 *   unlock all         : release all locks on target held by this user
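
[Editor's illustration, not part of the patch] A userspace sketch of the read/write protocol listed above. /dev/vga_arbiter is the arbiter's character device; the PCI address passed to "target" is made up for the example.

/* Build: cc -o vgaarb-demo vgaarb-demo.c */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void cmd(int fd, const char *s)
{
	/* Each write() carries one command, e.g. "lock io+mem" */
	if (write(fd, s, strlen(s)) < 0)
		perror(s);
}

int main(void)
{
	char status[1024];
	ssize_t n;
	int fd = open("/dev/vga_arbiter", O_RDWR);

	if (fd < 0)
		return 1;

	cmd(fd, "target PCI:0000:01:00.0");	/* example address */
	cmd(fd, "lock io+mem");

	n = read(fd, status, sizeof(status) - 1);
	if (n > 0) {
		status[n] = '\0';
		printf("%s", status);	/* "count:...,PCI:...,decodes=...,owns=...,locks=..." */
	}

	cmd(fd, "unlock io+mem");
	close(fd);
	return 0;
}
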
@@ -1034,23 +1041,21 @@ EXPORT_SYMBOL(vga_client_register);
 * Note about locks:
 *
 * The driver keeps track of which user has what locks on which card. It
 * supports stacking, like the kernel one. This complexifies the implementation
 * supports stacking, like the kernel one. This complicates the implementation
 * a bit, but makes the arbiter more tolerant to userspace problems and able
 * to properly cleanup in all cases when a process dies.
 * Currently, a max of 16 cards simultaneously can have locks issued from
 * userspace for a given user (file descriptor instance) of the arbiter.
 *
 * If the device is hot-unplugged, there is a hook inside the module to notify
 * they being added/removed in the system and automatically added/removed in
 * it being added/removed in the system and automatically added/removed in
 * the arbiter.
 */

#define MAX_USER_CARDS		CONFIG_VGA_ARB_MAX_GPUS
#define PCI_INVALID_CARD	((struct pci_dev *)-1UL)

/*
 * Each user has an array of these, tracking which cards have locks
 */
/* Each user has an array of these, tracking which cards have locks */
struct vga_arb_user_card {
	struct pci_dev *pdev;
	unsigned int mem_cnt;
@@ -1069,9 +1074,8 @@ static DEFINE_SPINLOCK(vga_user_lock);


/*
 * This function gets a string in the format: "PCI:domain:bus:dev.fn" and
 * returns the respective values. If the string is not in this format,
 * it returns 0.
 * Take a string in the format: "PCI:domain:bus:dev.fn" and return the
 * respective values. If the string is not in this format, return 0.
 */
static int vga_pci_str_to_vars(char *buf, int count, unsigned int *domain,
			       unsigned int *bus, unsigned int *devfn)
@@ -1079,7 +1083,6 @@ static int vga_pci_str_to_vars(char *buf, int count, unsigned int *domain,
	int n;
	unsigned int slot, func;


	n = sscanf(buf, "PCI:%x:%x:%x.%x", domain, bus, &slot, &func);
	if (n != 4)
		return 0;
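
[Editor's illustration, not part of the patch] For reference, a hypothetical standalone helper showing the same "PCI:domain:bus:dev.fn" format and the devfn composition the parser above relies on.

#include <linux/kernel.h>
#include <linux/pci.h>

/* Hypothetical: accepts e.g. "PCI:0000:01:00.0", returns 1 on match */
static int parse_vga_target(const char *buf, unsigned int *domain,
			    unsigned int *bus, unsigned int *devfn)
{
	unsigned int slot, func;

	if (sscanf(buf, "PCI:%x:%x:%x.%x", domain, bus, &slot, &func) != 4)
		return 0;		/* same "not this format" convention */

	*devfn = PCI_DEVFN(slot, func);	/* (slot << 3) | func */
	return 1;
}
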
@@ -1104,7 +1107,7 @@ static ssize_t vga_arb_read(struct file *file, char __user *buf,
	if (lbuf == NULL)
		return -ENOMEM;

	/* Protects vga_list */
	/* Protect vga_list */
	spin_lock_irqsave(&vga_lock, flags);

	/* If we are targeting the default, use it */
@@ -1118,15 +1121,16 @@ static ssize_t vga_arb_read(struct file *file, char __user *buf,
	/* Find card vgadev structure */
	vgadev = vgadev_find(pdev);
	if (vgadev == NULL) {
		/* Wow, it's not in the list, that shouldn't happen,
		 * let's fix us up and return invalid card
		/*
		 * Wow, it's not in the list, that shouldn't happen, let's
		 * fix us up and return invalid card.
		 */
		spin_unlock_irqrestore(&vga_lock, flags);
		len = sprintf(lbuf, "invalid");
		goto done;
	}

	/* Fill the buffer with infos */
	/* Fill the buffer with info */
	len = snprintf(lbuf, 1024,
		       "count:%d,PCI:%s,decodes=%s,owns=%s,locks=%s(%u:%u)\n",
		       vga_decode_count, pci_name(pdev),
@@ -1172,7 +1176,7 @@ static ssize_t vga_arb_write(struct file *file, const char __user *buf,
	if (copy_from_user(kbuf, buf, count))
		return -EFAULT;
	curr_pos = kbuf;
	kbuf[count] = '\0';	/* Just to make sure... */
	kbuf[count] = '\0';

	if (strncmp(curr_pos, "lock ", 5) == 0) {
		curr_pos += 5;
@@ -1197,7 +1201,7 @@ static ssize_t vga_arb_write(struct file *file, const char __user *buf,

		vga_get_uninterruptible(pdev, io_state);

		/* Update the client's locks lists... */
		/* Update the client's locks lists */
		for (i = 0; i < MAX_USER_CARDS; i++) {
			if (priv->cards[i].pdev == pdev) {
				if (io_state & VGA_RSRC_LEGACY_IO)
@@ -1314,7 +1318,7 @@ static ssize_t vga_arb_write(struct file *file, const char __user *buf,
		curr_pos += 7;
		remaining -= 7;
		pr_debug("client 0x%p called 'target'\n", priv);
		/* if target is default */
		/* If target is default */
		if (!strncmp(curr_pos, "default", 7))
			pdev = pci_dev_get(vga_default_device());
		else {
@@ -1364,7 +1368,7 @@ static ssize_t vga_arb_write(struct file *file, const char __user *buf,
			vgaarb_dbg(&pdev->dev, "maximum user cards (%d) number reached, ignoring this one!\n",
				   MAX_USER_CARDS);
			pci_dev_put(pdev);
			/* XXX: which value to return? */
			/* XXX: Which value to return? */
			ret_val = -ENOMEM;
			goto done;
		}
@@ -1425,13 +1429,12 @@ static int vga_arb_open(struct inode *inode, struct file *file)
	list_add(&priv->list, &vga_user_list);
	spin_unlock_irqrestore(&vga_user_lock, flags);

	/* Set the client' lists of locks */
	/* Set the client's lists of locks */
	priv->target = vga_default_device(); /* Maybe this is still null! */
	priv->cards[0].pdev = priv->target;
	priv->cards[0].io_cnt = 0;
	priv->cards[0].mem_cnt = 0;


	return 0;
}

@@ -1465,25 +1468,23 @@ static int vga_arb_release(struct inode *inode, struct file *file)
}

/*
 * callback any registered clients to let them know we have a
 * change in VGA cards
 * Callback any registered clients to let them know we have a change in VGA
 * cards.
 */
static void vga_arbiter_notify_clients(void)
{
	struct vga_device *vgadev;
	unsigned long flags;
	uint32_t new_decodes;
	unsigned int new_decodes;
	bool new_state;

	if (!vga_arbiter_used)
		return;

	new_state = (vga_count > 1) ? false : true;

	spin_lock_irqsave(&vga_lock, flags);
	list_for_each_entry(vgadev, &vga_list, list) {
		if (vga_count > 1)
			new_state = false;
		else
			new_state = true;
		if (vgadev->set_decode) {
			new_decodes = vgadev->set_decode(vgadev->pdev,
							 new_state);
@@ -1502,9 +1503,11 @@ static int pci_notify(struct notifier_block *nb, unsigned long action,

	vgaarb_dbg(dev, "%s\n", __func__);

	/* For now we're only intereted in devices added and removed. I didn't
	 * test this thing here, so someone needs to double check for the
	 * cases of hotplugable vga cards. */
	/*
	 * For now, we're only interested in devices added and removed.
	 * I didn't test this thing here, so someone needs to double check
	 * for the cases of hot-pluggable VGA cards.
	 */
	if (action == BUS_NOTIFY_ADD_DEVICE)
		notify = vga_arbiter_add_pci_device(pdev);
	else if (action == BUS_NOTIFY_DEL_DEVICE)
@@ -1543,8 +1546,7 @@ static int __init vga_arb_device_init(void)

	bus_register_notifier(&pci_bus_type, &pci_notifier);

	/* We add all PCI devices satisfying VGA class in the arbiter by
	 * default */
	/* Add all VGA class PCI devices by default */
	pdev = NULL;
	while ((pdev =
		pci_get_subsys(PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,

@@ -275,8 +275,23 @@ static ssize_t vpd_read(struct file *filp, struct kobject *kobj,
			size_t count)
{
	struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
	struct pci_dev *vpd_dev = dev;
	ssize_t ret;

	return pci_read_vpd(dev, off, count, buf);
	if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) {
		vpd_dev = pci_get_func0_dev(dev);
		if (!vpd_dev)
			return -ENODEV;
	}

	pci_config_pm_runtime_get(vpd_dev);
	ret = pci_read_vpd(vpd_dev, off, count, buf);
	pci_config_pm_runtime_put(vpd_dev);

	if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0)
		pci_dev_put(vpd_dev);

	return ret;
}

static ssize_t vpd_write(struct file *filp, struct kobject *kobj,
@@ -284,8 +299,23 @@ static ssize_t vpd_write(struct file *filp, struct kobject *kobj,
			 size_t count)
{
	struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
	struct pci_dev *vpd_dev = dev;
	ssize_t ret;

	return pci_write_vpd(dev, off, count, buf);
	if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) {
		vpd_dev = pci_get_func0_dev(dev);
		if (!vpd_dev)
			return -ENODEV;
	}

	pci_config_pm_runtime_get(vpd_dev);
	ret = pci_write_vpd(vpd_dev, off, count, buf);
	pci_config_pm_runtime_put(vpd_dev);

	if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0)
		pci_dev_put(vpd_dev);

	return ret;
}
static BIN_ATTR(vpd, 0600, vpd_read, vpd_write, 0);

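
[Editor's illustration, not part of the patch] The same power-up bracket can wrap any config-space access. A minimal sketch, assuming code living inside drivers/pci, since pci_config_pm_runtime_get()/put() are declared in the core's private drivers/pci/pci.h rather than a public header; the helper name is hypothetical.

#include <linux/pci.h>
#include "pci.h"	/* drivers/pci internal header */

/* Hypothetical helper: read Vendor ID with the device held in D0 */
static u16 read_vendor_id_powered(struct pci_dev *dev)
{
	u16 vendor;

	pci_config_pm_runtime_get(dev);
	pci_read_config_word(dev, PCI_VENDOR_ID, &vendor);
	pci_config_pm_runtime_put(dev);

	return vendor;
}
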
@@ -41,19 +41,8 @@ struct aer_capability_regs {
};

#if defined(CONFIG_PCIEAER)
/* PCIe port driver needs this function to enable AER */
int pci_enable_pcie_error_reporting(struct pci_dev *dev);
int pci_disable_pcie_error_reporting(struct pci_dev *dev);
int pci_aer_clear_nonfatal_status(struct pci_dev *dev);
#else
static inline int pci_enable_pcie_error_reporting(struct pci_dev *dev)
{
	return -EINVAL;
}
static inline int pci_disable_pcie_error_reporting(struct pci_dev *dev)
{
	return -EINVAL;
}
static inline int pci_aer_clear_nonfatal_status(struct pci_dev *dev)
{
	return -EINVAL;

@@ -366,8 +366,8 @@ struct pci_dev {
	pci_power_t	current_state;	/* Current operating state. In ACPI,
					   this is D0-D3, D0 being fully
					   functional, and D3 being off. */
	unsigned int	imm_ready:1;	/* Supports Immediate Readiness */
	u8		pm_cap;		/* PM capability offset */
	unsigned int	imm_ready:1;	/* Supports Immediate Readiness */
	unsigned int	pme_support:5;	/* Bitmask of states from which PME#
					   can be generated */
	unsigned int	pme_poll:1;	/* Poll device's PME status bit */
@@ -392,9 +392,9 @@ struct pci_dev {

#ifdef CONFIG_PCIEASPM
	struct pcie_link_state *link_state;	/* ASPM link state */
	u16		l1ss;		/* L1SS Capability pointer */
	unsigned int	ltr_path:1;	/* Latency Tolerance Reporting
					   supported from root to here */
	u16		l1ss;		/* L1SS Capability pointer */
#endif
	unsigned int	pasid_no_tlp:1;		/* PASID works without TLP Prefix */
	unsigned int	eetlp_prefix_path:1;	/* End-to-End TLP Prefix */
@@ -464,12 +464,13 @@ struct pci_dev {
	unsigned int	no_vf_scan:1;		/* Don't scan for VFs after IOV enablement */
	unsigned int	no_command_memory:1;	/* No PCI_COMMAND_MEMORY */
	unsigned int	rom_bar_overlap:1;	/* ROM BAR disable broken */
	unsigned int	rom_attr_enabled:1;	/* Display of ROM attribute enabled? */
	pci_dev_flags_t dev_flags;
	atomic_t	enable_cnt;	/* pci_enable_device has been called */

	spinlock_t	pcie_cap_lock;		/* Protects RMW ops in capability accessors */
	u32		saved_config_space[16]; /* Config space saved at suspend time */
	struct hlist_head saved_cap_space;
	int		rom_attr_enabled;	/* Display of ROM attribute enabled? */
	struct bin_attribute *res_attr[DEVICE_COUNT_RESOURCE]; /* sysfs file for resources */
	struct bin_attribute *res_attr_wc[DEVICE_COUNT_RESOURCE]; /* sysfs file for WC mapping of resources */

@@ -1217,11 +1218,40 @@ int pcie_capability_read_word(struct pci_dev *dev, int pos, u16 *val);
int pcie_capability_read_dword(struct pci_dev *dev, int pos, u32 *val);
int pcie_capability_write_word(struct pci_dev *dev, int pos, u16 val);
int pcie_capability_write_dword(struct pci_dev *dev, int pos, u32 val);
int pcie_capability_clear_and_set_word(struct pci_dev *dev, int pos,
				       u16 clear, u16 set);
int pcie_capability_clear_and_set_word_unlocked(struct pci_dev *dev, int pos,
						u16 clear, u16 set);
int pcie_capability_clear_and_set_word_locked(struct pci_dev *dev, int pos,
					      u16 clear, u16 set);
int pcie_capability_clear_and_set_dword(struct pci_dev *dev, int pos,
					u32 clear, u32 set);

/**
 * pcie_capability_clear_and_set_word - RMW accessor for PCI Express Capability Registers
 * @dev:	PCI device structure of the PCI Express device
 * @pos:	PCI Express Capability Register
 * @clear:	Clear bitmask
 * @set:	Set bitmask
 *
 * Perform a Read-Modify-Write (RMW) operation using @clear and @set
 * bitmasks on PCI Express Capability Register at @pos. Certain PCI Express
 * Capability Registers are accessed concurrently in RMW fashion, hence
 * require locking which is handled transparently to the caller.
 */
static inline int pcie_capability_clear_and_set_word(struct pci_dev *dev,
						     int pos,
						     u16 clear, u16 set)
{
	switch (pos) {
	case PCI_EXP_LNKCTL:
	case PCI_EXP_RTCTL:
		return pcie_capability_clear_and_set_word_locked(dev, pos,
								 clear, set);
	default:
		return pcie_capability_clear_and_set_word_unlocked(dev, pos,
								   clear, set);
	}
}

static inline int pcie_capability_set_word(struct pci_dev *dev, int pos,
					   u16 set)
{
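
[Editor's illustration, not part of the patch] A hedged usage sketch: because @pos is PCI_EXP_LNKCTL, the inline above routes the call through the _locked variant, so concurrent read-modify-writes of Link Control stay consistent. The helper name is illustrative only.

#include <linux/pci.h>

/* Hypothetical: turn off ASPM L0s/L1 via the locked RMW accessor */
static int mydrv_disable_aspm(struct pci_dev *dev)
{
	return pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL,
						  PCI_EXP_LNKCTL_ASPMC, 0);
}
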
@@ -1403,7 +1433,6 @@ void pci_assign_unassigned_bridge_resources(struct pci_dev *bridge);
void pci_assign_unassigned_bus_resources(struct pci_bus *bus);
void pci_assign_unassigned_root_bus_resources(struct pci_bus *bus);
int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type);
void pdev_enable_device(struct pci_dev *);
int pci_enable_resources(struct pci_dev *, int mask);
void pci_assign_irq(struct pci_dev *dev);
struct resource *pci_find_resource(struct pci_dev *dev, struct resource *res);
@@ -2260,6 +2289,11 @@ int pcibios_alloc_irq(struct pci_dev *dev);
void pcibios_free_irq(struct pci_dev *dev);
resource_size_t pcibios_default_alignment(void);

#if !defined(HAVE_PCI_MMAP) && !defined(ARCH_GENERIC_PCI_MMAP_RESOURCE)
extern int pci_create_resource_files(struct pci_dev *dev);
extern void pci_remove_resource_files(struct pci_dev *dev);
#endif

#if defined(CONFIG_PCI_MMCONFIG) || defined(CONFIG_ACPI_MCFG)
void __init pci_mmcfg_early_init(void);
void __init pci_mmcfg_late_init(void);

@@ -41,6 +41,7 @@ enum {
enum switchtec_gen {
	SWITCHTEC_GEN3,
	SWITCHTEC_GEN4,
	SWITCHTEC_GEN5,
};

struct mrpc_regs {

@@ -1,3 +1,5 @@
/* SPDX-License-Identifier: MIT */

/*
 * The VGA aribiter manages VGA space routing and VGA resource decode to
 * allow multiple VGA devices to be used in a system in a safe way.
@@ -5,27 +7,6 @@
 * (C) Copyright 2005 Benjamin Herrenschmidt <benh@kernel.crashing.org>
 * (C) Copyright 2007 Paulo R. Zanoni <przanoni@gmail.com>
 * (C) Copyright 2007, 2009 Tiago Vignatti <vignatti@freedesktop.org>
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS
 * IN THE SOFTWARE.
 *
 */

#ifndef LINUX_VGA_H
@@ -96,7 +77,7 @@ static inline int vga_client_register(struct pci_dev *pdev,
static inline int vga_get_interruptible(struct pci_dev *pdev,
					unsigned int rsrc)
{
	return vga_get(pdev, rsrc, 1);
	return vga_get(pdev, rsrc, 1);
}

/**
@@ -111,7 +92,7 @@ static inline int vga_get_interruptible(struct pci_dev *pdev,
static inline int vga_get_uninterruptible(struct pci_dev *pdev,
					  unsigned int rsrc)
{
	return vga_get(pdev, rsrc, 0);
	return vga_get(pdev, rsrc, 0);
}

static inline void vga_client_unregister(struct pci_dev *pdev)