Merge branch 'pci/enumeration'
- Clear bridge Secondary Status errors after enumeration since enumeration
  causes many errors (Vidya Sagar)

- Wait for Link Training==0 before starting Link retrain to avoid a race;
  this was done previously but broken by a faulty merge (Ilpo Järvinen)

- Rename PCI_IRQ_LEGACY to PCI_IRQ_INTX to be more specific about what
  "LEGACY" means (Damien Le Moal)

- Update return types of pci_find_capability() stubs to match the extern
  declarations for the actual implementations (Bjorn Helgaas)

- Drop unnecessary pci_enable_device_io() from pata_cs5520 (Heiner Kallweit)

- Drop unused pci_enable_device_io() (Heiner Kallweit)

- On 2016 and newer BIOSes, skip early E820 check for ECAM regions described
  in ACPI MCFG; there's no spec requirement for E820 reservations, and some
  machines don't provide them (Bjorn Helgaas)

- If devices were disconnected while suspended, don't wait for them when
  resuming (Ilpo Järvinen)

* pci/enumeration:
  PCI: Do not wait for disconnected devices when resuming
  x86/pci: Skip early E820 check for ECAM region
  PCI: Remove unused pci_enable_device_io()
  ata: pata_cs5520: Remove unnecessary call to pci_enable_device_io()
  PCI: Update pci_find_capability() stub return types
  PCI: Remove PCI_IRQ_LEGACY
  scsi: vmw_pvscsi: Do not use PCI_IRQ_LEGACY instead of PCI_IRQ_LEGACY
  scsi: pmcraid: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  scsi: mpt3sas: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  scsi: megaraid_sas: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  scsi: ipr: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  scsi: hpsa: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  scsi: arcmsr: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  wifi: rtw89: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  wifi: rtw88: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  wifi: ath10k: Refer to INTX instead of LEGACY
  net: wangxun: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  r8169: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  net: alx: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  net: atlantic: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  net: amd-xgbe: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  VMCI: Use PCI_IRQ_ALL_TYPES to remove PCI_IRQ_LEGACY use
  RDMA/vmw_pvrdma: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  IB/qib: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  drm/amdgpu: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  mfd: intel-lpss: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  ntb: idt: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  platform/x86: intel_ips: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  tty: 8250_pci: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  usb: hcd-pci: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  ASoC: Intel: avs: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  Documentation: PCI: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  PCI/portdrv: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  PCI/MSI: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  PCI: Clarify intent of LT wait
  PCI: Wait for Link Training==0 before starting Link retrain
  PCI: Clear Secondary Status errors after enumeration
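Note: the PCI_IRQ_LEGACY -> PCI_IRQ_INTX change in this branch only renames the flag; the driver-side allocation pattern is unchanged. As a minimal illustrative sketch (a hypothetical helper, not code from this series), a driver can request any interrupt type and fall back to INTx automatically:

  #include <linux/pci.h>

  /* Hypothetical helper: request a single vector of any supported type.
   * INTx is only attempted because min_vecs == 1. */
  static int example_setup_irq(struct pci_dev *pdev)
  {
          int nvecs;

          /* PCI_IRQ_ALL_TYPES == PCI_IRQ_INTX | PCI_IRQ_MSI | PCI_IRQ_MSIX */
          nvecs = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
          if (nvecs < 0)
                  return nvecs;

          /* vector 0 maps to the Linux IRQ number, whatever type was granted */
          return pci_irq_vector(pdev, 0);
  }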
commit ce4a9f1b1c
@@ -103,7 +103,7 @@ min_vecs argument set to this limit, and the PCI core will return -ENOSPC
 if it can't meet the minimum number of vectors.
 
 The flags argument is used to specify which type of interrupt can be used
-by the device and the driver (PCI_IRQ_LEGACY, PCI_IRQ_MSI, PCI_IRQ_MSIX).
+by the device and the driver (PCI_IRQ_INTX, PCI_IRQ_MSI, PCI_IRQ_MSIX).
 A convenient short-hand (PCI_IRQ_ALL_TYPES) is also available to ask for
 any possible kind of interrupt. If the PCI_IRQ_AFFINITY flag is set,
 pci_alloc_irq_vectors() will spread the interrupts around the available CPUs.
@@ -335,7 +335,7 @@ causes the PCI support to program CPU vector data into the PCI device
 capability registers. Many architectures, chip-sets, or BIOSes do NOT
 support MSI or MSI-X and a call to pci_alloc_irq_vectors with just
 the PCI_IRQ_MSI and PCI_IRQ_MSIX flags will fail, so try to always
-specify PCI_IRQ_LEGACY as well.
+specify PCI_IRQ_INTX as well.
 
 Drivers that have different interrupt handlers for MSI/MSI-X and
 legacy INTx should chose the right one based on the msi_enabled
@@ -88,7 +88,7 @@ MSI功能。
 如果设备对最小数量的向量有要求，驱动程序可以传递一个min_vecs参数，设置为这个限制，
 如果PCI核不能满足最小数量的向量，将返回-ENOSPC。
 
-flags参数用来指定设备和驱动程序可以使用哪种类型的中断(PCI_IRQ_LEGACY, PCI_IRQ_MSI,
+flags参数用来指定设备和驱动程序可以使用哪种类型的中断(PCI_IRQ_INTX, PCI_IRQ_MSI,
 PCI_IRQ_MSIX)。一个方便的短语(PCI_IRQ_ALL_TYPES)也可以用来要求任何可能的中断类型。
 如果PCI_IRQ_AFFINITY标志被设置，pci_alloc_irq_vectors()将把中断分散到可用的CPU上。
 
@@ -304,7 +304,7 @@ MSI-X可以分配几个单独的向量。
 的PCI_IRQ_MSI和/或PCI_IRQ_MSIX标志来启用MSI功能。这将导致PCI支持将CPU向量数
 据编程到PCI设备功能寄存器中。许多架构、芯片组或BIOS不支持MSI或MSI-X，调用
 ``pci_alloc_irq_vectors`` 时只使用PCI_IRQ_MSI和PCI_IRQ_MSIX标志会失败，
-所以尽量也要指定 ``PCI_IRQ_LEGACY`` 。
+所以尽量也要指定 ``PCI_IRQ_INTX`` 。
 
 对MSI/MSI-X和传统INTx有不同中断处理程序的驱动程序应该在调用
 ``pci_alloc_irq_vectors`` 后根据 ``pci_dev``结构体中的 ``msi_enabled``
@@ -518,7 +518,34 @@ static bool __ref pci_mmcfg_reserved(struct device *dev,
 {
 	struct resource *conflict;
 
-	if (!early && !acpi_disabled) {
+	if (early) {
+
+		/*
+		 * Don't try to do this check unless configuration type 1
+		 * is available. How about type 2?
+		 */
+
+		/*
+		 * 946f2ee5c731 ("Check that MCFG points to an e820
+		 * reserved area") added this E820 check in 2006 to work
+		 * around BIOS defects.
+		 *
+		 * Per PCI Firmware r3.3, sec 4.1.2, ECAM space must be
+		 * reserved by a PNP0C02 resource, but it need not be
+		 * mentioned in E820. Before the ACPI interpreter is
+		 * available, we can't check for PNP0C02 resources, so
+		 * there's no reliable way to verify the region in this
+		 * early check. Keep it only for the old machines that
+		 * motivated 946f2ee5c731.
+		 */
+		if (dmi_get_bios_year() < 2016 && raw_pci_ops)
+			return is_mmconf_reserved(e820__mapped_all, cfg, dev,
+						  "E820 entry");
+
+		return true;
+	}
+
+	if (!acpi_disabled) {
 		if (is_mmconf_reserved(is_acpi_reserved, cfg, dev,
 				       "ACPI motherboard resource"))
 			return true;
@@ -551,16 +578,7 @@ static bool __ref pci_mmcfg_reserved(struct device *dev,
 	 * For MCFG information constructed from hotpluggable host bridge's
 	 * _CBA method, just assume it's reserved.
 	 */
-	if (pci_mmcfg_running_state)
-		return true;
-
-	/* Don't try to do this check unless configuration
-	   type 1 is available. how about type 2 ?*/
-	if (raw_pci_ops)
-		return is_mmconf_reserved(e820__mapped_all, cfg, dev,
-					  "E820 entry");
-
-	return false;
+	return pci_mmcfg_running_state;
 }
 
 static void __init pci_mmcfg_reject_broken(int early)
@@ -151,12 +151,6 @@ static int cs5520_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (!host)
 		return -ENOMEM;
 
-	/* Perform set up for DMA */
-	if (pci_enable_device_io(pdev)) {
-		dev_err(&pdev->dev, "unable to configure BAR2.\n");
-		return -ENODEV;
-	}
-
 	if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
 		dev_err(&pdev->dev, "unable to configure DMA mask.\n");
 		return -ENODEV;
@@ -279,7 +279,7 @@ int amdgpu_irq_init(struct amdgpu_device *adev)
 	adev->irq.msi_enabled = false;
 
 	if (!amdgpu_msi_ok(adev))
-		flags = PCI_IRQ_LEGACY;
+		flags = PCI_IRQ_INTX;
 	else
 		flags = PCI_IRQ_ALL_TYPES;
 
@@ -3281,7 +3281,7 @@ static int qib_7220_intr_fallback(struct qib_devdata *dd)
 
 	qib_free_irq(dd);
 	dd->msi_lo = 0;
-	if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_LEGACY) < 0)
+	if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_INTX) < 0)
 		qib_dev_err(dd, "Failed to enable INTx\n");
 	qib_setup_7220_interrupt(dd);
 	return 1;
@@ -3471,8 +3471,7 @@ try_intx:
 				pci_irq_vector(dd->pcidev, msixnum),
 				ret);
 			qib_7322_free_irq(dd);
-			pci_alloc_irq_vectors(dd->pcidev, 1, 1,
-					      PCI_IRQ_LEGACY);
+			pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_INTX);
 			goto try_intx;
 		}
 		dd->cspec->msix_entries[msixnum].arg = arg;
@@ -5143,7 +5142,7 @@ static int qib_7322_intr_fallback(struct qib_devdata *dd)
 	qib_devinfo(dd->pcidev,
 		    "MSIx interrupt not detected, trying INTx interrupts\n");
 	qib_7322_free_irq(dd);
-	if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_LEGACY) < 0)
+	if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_INTX) < 0)
 		qib_dev_err(dd, "Failed to enable INTx\n");
 	qib_setup_7322_interrupt(dd, 0);
 	return 1;
@@ -210,7 +210,7 @@ int qib_pcie_params(struct qib_devdata *dd, u32 minw, u32 *nent)
 	}
 
 	if (dd->flags & QIB_HAS_INTX)
-		flags |= PCI_IRQ_LEGACY;
+		flags |= PCI_IRQ_INTX;
 	maxvec = (nent && *nent) ? *nent : 1;
 	nvec = pci_alloc_irq_vectors(dd->pcidev, 1, maxvec, flags);
 	if (nvec < 0)
@@ -531,7 +531,7 @@ static int pvrdma_alloc_intrs(struct pvrdma_dev *dev)
 				    PCI_IRQ_MSIX);
 	if (ret < 0) {
 		ret = pci_alloc_irq_vectors(pdev, 1, 1,
-					    PCI_IRQ_MSI | PCI_IRQ_LEGACY);
+					    PCI_IRQ_MSI | PCI_IRQ_INTX);
 		if (ret < 0)
 			return ret;
 	}
@@ -54,7 +54,7 @@ static int intel_lpss_pci_probe(struct pci_dev *pdev,
 	if (ret)
 		return ret;
 
-	ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_LEGACY);
+	ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_INTX);
 	if (ret < 0)
 		return ret;
 
@@ -787,8 +787,7 @@ static int vmci_guest_probe_device(struct pci_dev *pdev,
 	error = pci_alloc_irq_vectors(pdev, num_irq_vectors, num_irq_vectors,
 				      PCI_IRQ_MSIX);
 	if (error < 0) {
-		error = pci_alloc_irq_vectors(pdev, 1, 1,
-				PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY);
+		error = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
 		if (error < 0)
 			goto err_unsubscribe_event;
 	} else {
@@ -170,7 +170,7 @@ static int xgbe_config_irqs(struct xgbe_prv_data *pdata)
 		goto out;
 
 	ret = pci_alloc_irq_vectors(pdata->pcidev, 1, 1,
-				    PCI_IRQ_LEGACY | PCI_IRQ_MSI);
+				    PCI_IRQ_INTX | PCI_IRQ_MSI);
 	if (ret < 0) {
 		dev_info(pdata->dev, "single IRQ enablement failed\n");
 		return ret;
@@ -17,7 +17,7 @@
 
 #define AQ_CFG_IS_POLLING_DEF 0U
 
-#define AQ_CFG_FORCE_LEGACY_INT 0U
+#define AQ_CFG_FORCE_INTX 0U
 
 #define AQ_CFG_INTERRUPT_MODERATION_OFF 0
 #define AQ_CFG_INTERRUPT_MODERATION_ON 1
@@ -104,7 +104,7 @@ struct aq_stats_s {
 };
 
 #define AQ_HW_IRQ_INVALID 0U
-#define AQ_HW_IRQ_LEGACY 1U
+#define AQ_HW_IRQ_INTX 1U
 #define AQ_HW_IRQ_MSI 2U
 #define AQ_HW_IRQ_MSIX 3U
 
@@ -127,7 +127,7 @@ void aq_nic_cfg_start(struct aq_nic_s *self)
 
 	cfg->irq_type = aq_pci_func_get_irq_type(self);
 
-	if ((cfg->irq_type == AQ_HW_IRQ_LEGACY) ||
+	if ((cfg->irq_type == AQ_HW_IRQ_INTX) ||
 	    (cfg->aq_hw_caps->vecs == 1U) ||
 	    (cfg->vecs == 1U)) {
 		cfg->is_rss = 0U;
@@ -200,7 +200,7 @@ unsigned int aq_pci_func_get_irq_type(struct aq_nic_s *self)
 	if (self->pdev->msi_enabled)
 		return AQ_HW_IRQ_MSI;
 
-	return AQ_HW_IRQ_LEGACY;
+	return AQ_HW_IRQ_INTX;
 }
 
 static void aq_pci_free_irq_vectors(struct aq_nic_s *self)
@@ -298,11 +298,8 @@ static int aq_pci_probe(struct pci_dev *pdev,
 
 	numvecs += AQ_HW_SERVICE_IRQS;
 	/*enable interrupts */
-#if !AQ_CFG_FORCE_LEGACY_INT
-	err = pci_alloc_irq_vectors(self->pdev, 1, numvecs,
-				    PCI_IRQ_MSIX | PCI_IRQ_MSI |
-				    PCI_IRQ_LEGACY);
-
+#if !AQ_CFG_FORCE_INTX
+	err = pci_alloc_irq_vectors(self->pdev, 1, numvecs, PCI_IRQ_ALL_TYPES);
 	if (err < 0)
 		goto err_hwinit;
 	numvecs = err;
@@ -352,7 +352,7 @@ static int hw_atl_a0_hw_init(struct aq_hw_s *self, const u8 *mac_addr)
 {
 	static u32 aq_hw_atl_igcr_table_[4][2] = {
 		[AQ_HW_IRQ_INVALID] = { 0x20000000U, 0x20000000U },
-		[AQ_HW_IRQ_LEGACY]  = { 0x20000080U, 0x20000080U },
+		[AQ_HW_IRQ_INTX]    = { 0x20000080U, 0x20000080U },
 		[AQ_HW_IRQ_MSI]     = { 0x20000021U, 0x20000025U },
 		[AQ_HW_IRQ_MSIX]    = { 0x20000022U, 0x20000026U },
 	};
@@ -562,7 +562,7 @@ static int hw_atl_b0_hw_init(struct aq_hw_s *self, const u8 *mac_addr)
 {
 	static u32 aq_hw_atl_igcr_table_[4][2] = {
 		[AQ_HW_IRQ_INVALID] = { 0x20000000U, 0x20000000U },
-		[AQ_HW_IRQ_LEGACY]  = { 0x20000080U, 0x20000080U },
+		[AQ_HW_IRQ_INTX]    = { 0x20000080U, 0x20000080U },
 		[AQ_HW_IRQ_MSI]     = { 0x20000021U, 0x20000025U },
 		[AQ_HW_IRQ_MSIX]    = { 0x20000022U, 0x20000026U },
 	};
@@ -534,7 +534,7 @@ static int hw_atl2_hw_init(struct aq_hw_s *self, const u8 *mac_addr)
 {
 	static u32 aq_hw_atl2_igcr_table_[4][2] = {
 		[AQ_HW_IRQ_INVALID] = { 0x20000000U, 0x20000000U },
-		[AQ_HW_IRQ_LEGACY]  = { 0x20000080U, 0x20000080U },
+		[AQ_HW_IRQ_INTX]    = { 0x20000080U, 0x20000080U },
 		[AQ_HW_IRQ_MSI]     = { 0x20000021U, 0x20000025U },
 		[AQ_HW_IRQ_MSIX]    = { 0x20000022U, 0x20000026U },
 	};
@@ -901,7 +901,7 @@ static int alx_init_intr(struct alx_priv *alx)
 	int ret;
 
 	ret = pci_alloc_irq_vectors(alx->hw.pdev, 1, 1,
-				    PCI_IRQ_MSI | PCI_IRQ_LEGACY);
+				    PCI_IRQ_MSI | PCI_IRQ_INTX);
 	if (ret < 0)
 		return ret;
 
@@ -5076,7 +5076,7 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
 		rtl_lock_config_regs(tp);
 		fallthrough;
 	case RTL_GIGA_MAC_VER_07 ... RTL_GIGA_MAC_VER_17:
-		flags = PCI_IRQ_LEGACY;
+		flags = PCI_IRQ_INTX;
 		break;
 	default:
 		flags = PCI_IRQ_ALL_TYPES;
@@ -1674,14 +1674,14 @@ static int wx_set_interrupt_capability(struct wx *wx)
 	/* minmum one for queue, one for misc*/
 	nvecs = 1;
 	nvecs = pci_alloc_irq_vectors(pdev, nvecs,
-				      nvecs, PCI_IRQ_MSI | PCI_IRQ_LEGACY);
+				      nvecs, PCI_IRQ_MSI | PCI_IRQ_INTX);
 	if (nvecs == 1) {
 		if (pdev->msi_enabled)
 			wx_err(wx, "Fallback to MSI.\n");
 		else
-			wx_err(wx, "Fallback to LEGACY.\n");
+			wx_err(wx, "Fallback to INTx.\n");
 	} else {
-		wx_err(wx, "Failed to allocate MSI/LEGACY interrupts. Error: %d\n", nvecs);
+		wx_err(wx, "Failed to allocate MSI/INTx interrupts. Error: %d\n", nvecs);
 		return nvecs;
 	}
 
@@ -2127,7 +2127,7 @@ void wx_write_eitr(struct wx_q_vector *q_vector)
 * wx_configure_vectors - Configure vectors for hardware
 * @wx: board private structure
 *
- * wx_configure_vectors sets up the hardware to properly generate MSI-X/MSI/LEGACY
+ * wx_configure_vectors sets up the hardware to properly generate MSI-X/MSI/INTx
 * interrupts.
 **/
 void wx_configure_vectors(struct wx *wx)
@@ -394,14 +394,14 @@ static irqreturn_t ath10k_ahb_interrupt_handler(int irq, void *arg)
 	if (!ath10k_pci_irq_pending(ar))
 		return IRQ_NONE;
 
-	ath10k_pci_disable_and_clear_legacy_irq(ar);
+	ath10k_pci_disable_and_clear_intx_irq(ar);
 	ath10k_pci_irq_msi_fw_mask(ar);
 	napi_schedule(&ar->napi);
 
 	return IRQ_HANDLED;
 }
 
-static int ath10k_ahb_request_irq_legacy(struct ath10k *ar)
+static int ath10k_ahb_request_irq_intx(struct ath10k *ar)
 {
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
 	struct ath10k_ahb *ar_ahb = ath10k_ahb_priv(ar);
@@ -415,12 +415,12 @@ static int ath10k_ahb_request_irq_legacy(struct ath10k *ar)
 			    ar_ahb->irq, ret);
 		return ret;
 	}
-	ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_LEGACY;
+	ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_INTX;
 
 	return 0;
 }
 
-static void ath10k_ahb_release_irq_legacy(struct ath10k *ar)
+static void ath10k_ahb_release_irq_intx(struct ath10k *ar)
 {
 	struct ath10k_ahb *ar_ahb = ath10k_ahb_priv(ar);
 
@@ -430,7 +430,7 @@ static void ath10k_ahb_release_irq_legacy(struct ath10k *ar)
 static void ath10k_ahb_irq_disable(struct ath10k *ar)
 {
 	ath10k_ce_disable_interrupts(ar);
-	ath10k_pci_disable_and_clear_legacy_irq(ar);
+	ath10k_pci_disable_and_clear_intx_irq(ar);
 }
 
 static int ath10k_ahb_resource_init(struct ath10k *ar)
@@ -621,7 +621,7 @@ static int ath10k_ahb_hif_start(struct ath10k *ar)
 
 	ath10k_core_napi_enable(ar);
 	ath10k_ce_enable_interrupts(ar);
-	ath10k_pci_enable_legacy_irq(ar);
+	ath10k_pci_enable_intx_irq(ar);
 
 	ath10k_pci_rx_post(ar);
 
@@ -775,7 +775,7 @@ static int ath10k_ahb_probe(struct platform_device *pdev)
 
 	ath10k_pci_init_napi(ar);
 
-	ret = ath10k_ahb_request_irq_legacy(ar);
+	ret = ath10k_ahb_request_irq_intx(ar);
 	if (ret)
 		goto err_free_pipes;
 
@@ -806,7 +806,7 @@ err_halt_device:
 	ath10k_ahb_clock_disable(ar);
 
 err_free_irq:
-	ath10k_ahb_release_irq_legacy(ar);
+	ath10k_ahb_release_irq_intx(ar);
 
 err_free_pipes:
 	ath10k_pci_release_resource(ar);
@@ -828,7 +828,7 @@ static void ath10k_ahb_remove(struct platform_device *pdev)
 
 	ath10k_core_unregister(ar);
 	ath10k_ahb_irq_disable(ar);
-	ath10k_ahb_release_irq_legacy(ar);
+	ath10k_ahb_release_irq_intx(ar);
 	ath10k_pci_release_resource(ar);
 	ath10k_ahb_halt_chip(ar);
 	ath10k_ahb_clock_disable(ar);
@@ -721,7 +721,7 @@ bool ath10k_pci_irq_pending(struct ath10k *ar)
 	return false;
 }
 
-void ath10k_pci_disable_and_clear_legacy_irq(struct ath10k *ar)
+void ath10k_pci_disable_and_clear_intx_irq(struct ath10k *ar)
 {
 	/* IMPORTANT: INTR_CLR register has to be set after
 	 * INTR_ENABLE is set to 0, otherwise interrupt can not be
@@ -739,7 +739,7 @@ void ath10k_pci_disable_and_clear_legacy_irq(struct ath10k *ar)
 			   PCIE_INTR_ENABLE_ADDRESS);
 }
 
-void ath10k_pci_enable_legacy_irq(struct ath10k *ar)
+void ath10k_pci_enable_intx_irq(struct ath10k *ar)
 {
 	ath10k_pci_write32(ar, SOC_CORE_BASE_ADDRESS +
 			   PCIE_INTR_ENABLE_ADDRESS,
@@ -1935,7 +1935,7 @@ static void ath10k_pci_irq_msi_fw_unmask(struct ath10k *ar)
 static void ath10k_pci_irq_disable(struct ath10k *ar)
 {
 	ath10k_ce_disable_interrupts(ar);
-	ath10k_pci_disable_and_clear_legacy_irq(ar);
+	ath10k_pci_disable_and_clear_intx_irq(ar);
 	ath10k_pci_irq_msi_fw_mask(ar);
 }
 
@@ -1949,7 +1949,7 @@ static void ath10k_pci_irq_sync(struct ath10k *ar)
 static void ath10k_pci_irq_enable(struct ath10k *ar)
 {
 	ath10k_ce_enable_interrupts(ar);
-	ath10k_pci_enable_legacy_irq(ar);
+	ath10k_pci_enable_intx_irq(ar);
 	ath10k_pci_irq_msi_fw_unmask(ar);
 }
 
@@ -3111,11 +3111,11 @@ static irqreturn_t ath10k_pci_interrupt_handler(int irq, void *arg)
 		return IRQ_NONE;
 	}
 
-	if ((ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY) &&
+	if ((ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_INTX) &&
 	    !ath10k_pci_irq_pending(ar))
 		return IRQ_NONE;
 
-	ath10k_pci_disable_and_clear_legacy_irq(ar);
+	ath10k_pci_disable_and_clear_intx_irq(ar);
 	ath10k_pci_irq_msi_fw_mask(ar);
 	napi_schedule(&ar->napi);
 
@@ -3152,7 +3152,7 @@ static int ath10k_pci_napi_poll(struct napi_struct *ctx, int budget)
 			napi_schedule(ctx);
 			goto out;
 		}
-		ath10k_pci_enable_legacy_irq(ar);
+		ath10k_pci_enable_intx_irq(ar);
 		ath10k_pci_irq_msi_fw_unmask(ar);
 	}
 
@@ -3177,7 +3177,7 @@ static int ath10k_pci_request_irq_msi(struct ath10k *ar)
 	return 0;
 }
 
-static int ath10k_pci_request_irq_legacy(struct ath10k *ar)
+static int ath10k_pci_request_irq_intx(struct ath10k *ar)
 {
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
 	int ret;
@@ -3199,8 +3199,8 @@ static int ath10k_pci_request_irq(struct ath10k *ar)
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
 
 	switch (ar_pci->oper_irq_mode) {
-	case ATH10K_PCI_IRQ_LEGACY:
-		return ath10k_pci_request_irq_legacy(ar);
+	case ATH10K_PCI_IRQ_INTX:
+		return ath10k_pci_request_irq_intx(ar);
 	case ATH10K_PCI_IRQ_MSI:
 		return ath10k_pci_request_irq_msi(ar);
 	default:
@@ -3232,7 +3232,7 @@ static int ath10k_pci_init_irq(struct ath10k *ar)
 		   ath10k_pci_irq_mode);
 
 	/* Try MSI */
-	if (ath10k_pci_irq_mode != ATH10K_PCI_IRQ_LEGACY) {
+	if (ath10k_pci_irq_mode != ATH10K_PCI_IRQ_INTX) {
 		ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_MSI;
 		ret = pci_enable_msi(ar_pci->pdev);
 		if (ret == 0)
@@ -3250,7 +3250,7 @@ static int ath10k_pci_init_irq(struct ath10k *ar)
 	 * For now, fix the race by repeating the write in below
 	 * synchronization checking.
 	 */
-	ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_LEGACY;
+	ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_INTX;
 
 	ath10k_pci_write32(ar, SOC_CORE_BASE_ADDRESS + PCIE_INTR_ENABLE_ADDRESS,
 			   PCIE_INTR_FIRMWARE_MASK | PCIE_INTR_CE_MASK_ALL);
@@ -3258,7 +3258,7 @@ static int ath10k_pci_init_irq(struct ath10k *ar)
 	return 0;
 }
 
-static void ath10k_pci_deinit_irq_legacy(struct ath10k *ar)
+static void ath10k_pci_deinit_irq_intx(struct ath10k *ar)
 {
 	ath10k_pci_write32(ar, SOC_CORE_BASE_ADDRESS + PCIE_INTR_ENABLE_ADDRESS,
 			   0);
@@ -3269,8 +3269,8 @@ static int ath10k_pci_deinit_irq(struct ath10k *ar)
 	struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
 
 	switch (ar_pci->oper_irq_mode) {
-	case ATH10K_PCI_IRQ_LEGACY:
-		ath10k_pci_deinit_irq_legacy(ar);
+	case ATH10K_PCI_IRQ_INTX:
+		ath10k_pci_deinit_irq_intx(ar);
 		break;
 	default:
 		pci_disable_msi(ar_pci->pdev);
@@ -3307,14 +3307,14 @@ int ath10k_pci_wait_for_target_init(struct ath10k *ar)
 		if (val & FW_IND_INITIALIZED)
 			break;
 
-		if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY)
+		if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_INTX)
 			/* Fix potential race by repeating CORE_BASE writes */
-			ath10k_pci_enable_legacy_irq(ar);
+			ath10k_pci_enable_intx_irq(ar);
 
 		mdelay(10);
 	} while (time_before(jiffies, timeout));
 
-	ath10k_pci_disable_and_clear_legacy_irq(ar);
+	ath10k_pci_disable_and_clear_intx_irq(ar);
 	ath10k_pci_irq_msi_fw_mask(ar);
 
 	if (val == 0xffffffff) {
@@ -101,7 +101,7 @@ struct ath10k_pci_supp_chip {
 
 enum ath10k_pci_irq_mode {
 	ATH10K_PCI_IRQ_AUTO = 0,
-	ATH10K_PCI_IRQ_LEGACY = 1,
+	ATH10K_PCI_IRQ_INTX = 1,
 	ATH10K_PCI_IRQ_MSI = 2,
 };
 
@@ -243,9 +243,9 @@ int ath10k_pci_init_pipes(struct ath10k *ar);
 int ath10k_pci_init_config(struct ath10k *ar);
 void ath10k_pci_rx_post(struct ath10k *ar);
 void ath10k_pci_flush(struct ath10k *ar);
-void ath10k_pci_enable_legacy_irq(struct ath10k *ar);
+void ath10k_pci_enable_intx_irq(struct ath10k *ar);
 bool ath10k_pci_irq_pending(struct ath10k *ar);
-void ath10k_pci_disable_and_clear_legacy_irq(struct ath10k *ar);
+void ath10k_pci_disable_and_clear_intx_irq(struct ath10k *ar);
 void ath10k_pci_irq_msi_fw_mask(struct ath10k *ar);
 int ath10k_pci_wait_for_target_init(struct ath10k *ar);
 int ath10k_pci_setup_resource(struct ath10k *ar);
|
@ -1612,7 +1612,7 @@ static struct rtw_hci_ops rtw_pci_ops = {
|
||||
|
||||
static int rtw_pci_request_irq(struct rtw_dev *rtwdev, struct pci_dev *pdev)
|
||||
{
|
||||
unsigned int flags = PCI_IRQ_LEGACY;
|
||||
unsigned int flags = PCI_IRQ_INTX;
|
||||
int ret;
|
||||
|
||||
if (!rtw_disable_msi)
|
||||
|
@@ -3547,7 +3547,7 @@ static int rtw89_pci_request_irq(struct rtw89_dev *rtwdev,
 	unsigned long flags = 0;
 	int ret;
 
-	flags |= PCI_IRQ_LEGACY | PCI_IRQ_MSI;
+	flags |= PCI_IRQ_INTX | PCI_IRQ_MSI;
 	ret = pci_alloc_irq_vectors(pdev, 1, 1, flags);
 	if (ret < 0) {
 		rtw89_err(rtwdev, "failed to alloc irq vectors, ret %d\n", ret);
@@ -2129,7 +2129,7 @@ static int idt_init_isr(struct idt_ntb_dev *ndev)
 	int ret;
 
 	/* Allocate just one interrupt vector for the ISR */
-	ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_LEGACY);
+	ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_INTX);
 	if (ret != 1) {
 		dev_err(&pdev->dev, "Failed to allocate IRQ vector");
 		return ret;
@@ -213,8 +213,8 @@ EXPORT_SYMBOL(pci_disable_msix);
 * * %PCI_IRQ_MSIX      Allow trying MSI-X vector allocations
 * * %PCI_IRQ_MSI       Allow trying MSI vector allocations
 *
- * * %PCI_IRQ_LEGACY    Allow trying legacy INTx interrupts, if
- *                      and only if @min_vecs == 1
+ * * %PCI_IRQ_INTX      Allow trying INTx interrupts, if and
+ *                      only if @min_vecs == 1
 *
 * * %PCI_IRQ_AFFINITY  Auto-manage IRQs affinity by spreading
 *                      the vectors around available CPUs
@@ -279,8 +279,8 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
 		return nvecs;
 	}
 
-	/* use legacy IRQ if allowed */
-	if (flags & PCI_IRQ_LEGACY) {
+	/* use INTx IRQ if allowed */
+	if (flags & PCI_IRQ_INTX) {
 		if (min_vecs == 1 && dev->irq) {
 			/*
 			 * Invoke the affinity spreading logic to ensure that
@@ -1277,6 +1277,11 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
 	for (;;) {
 		u32 id;
 
+		if (pci_dev_is_disconnected(dev)) {
+			pci_dbg(dev, "disconnected; not waiting\n");
+			return -ENOTTY;
+		}
+
 		pci_read_config_dword(dev, PCI_COMMAND, &id);
 		if (!PCI_POSSIBLE_ERROR(id))
 			break;
@@ -2110,20 +2115,6 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
 	return err;
 }
 
-/**
- * pci_enable_device_io - Initialize a device for use with IO space
- * @dev: PCI device to be initialized
- *
- * Initialize device before it's used by a driver. Ask low-level code
- * to enable I/O resources. Wake up the device if it was suspended.
- * Beware, this function can fail.
- */
-int pci_enable_device_io(struct pci_dev *dev)
-{
-	return pci_enable_device_flags(dev, IORESOURCE_IO);
-}
-EXPORT_SYMBOL(pci_enable_device_io);
-
 /**
 * pci_enable_device_mem - Initialize a device for use with Memory space
 * @dev: PCI device to be initialized
@@ -4625,11 +4616,12 @@ int pcie_retrain_link(struct pci_dev *pdev, bool use_lt)
 
 	/*
 	 * Ensure the updated LNKCTL parameters are used during link
-	 * training by checking that there is no ongoing link training to
-	 * avoid LTSSM race as recommended in Implementation Note at the
-	 * end of PCIe r6.0.1 sec 7.5.3.7.
+	 * training by checking that there is no ongoing link training that
+	 * may have started before link parameters were changed, so as to
+	 * avoid LTSSM race as recommended in Implementation Note at the end
+	 * of PCIe r6.1 sec 7.5.3.7.
 	 */
-	rc = pcie_wait_for_link_status(pdev, use_lt, !use_lt);
+	rc = pcie_wait_for_link_status(pdev, true, false);
 	if (rc)
 		return rc;
 
@@ -187,15 +187,15 @@ static int pcie_init_service_irqs(struct pci_dev *dev, int *irqs, int mask)
 	 * interrupt.
 	 */
 	if ((mask & PCIE_PORT_SERVICE_PME) && pcie_pme_no_msi())
-		goto legacy_irq;
+		goto intx_irq;
 
 	/* Try to use MSI-X or MSI if supported */
 	if (pcie_port_enable_irq_vec(dev, irqs, mask) == 0)
 		return 0;
 
-legacy_irq:
-	/* fall back to legacy IRQ */
-	ret = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY);
+intx_irq:
+	/* fall back to INTX IRQ */
+	ret = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_INTX);
 	if (ret < 0)
 		return -ENODEV;
 
@@ -1482,6 +1482,9 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
 	}
 
 out:
+	/* Clear errors in the Secondary Status Register */
+	pci_write_config_word(dev, PCI_SEC_STATUS, 0xffff);
+
 	pci_write_config_word(dev, PCI_BRIDGE_CONTROL, bctl);
 
 	pm_runtime_put(&dev->dev);
@@ -1505,7 +1505,7 @@ static int ips_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	 * IRQ handler for ME interaction
 	 * Note: don't use MSI here as the PCH has bugs.
 	 */
-	ret = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY);
+	ret = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_INTX);
 	if (ret < 0)
 		return ret;
 
@@ -1007,7 +1007,7 @@ msi_int0:
 			goto msi_int1;
 		}
 	}
-	nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_LEGACY);
+	nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_INTX);
 	if (nvec < 1)
 		return FAILED;
 msi_int1:
@@ -7509,7 +7509,7 @@ fallback:
 */
 static int hpsa_interrupt_mode(struct ctlr_info *h)
 {
-	unsigned int flags = PCI_IRQ_LEGACY;
+	unsigned int flags = PCI_IRQ_INTX;
 	int ret;
 
 	/* Some boards advertise MSI but don't really support it */
@@ -9463,7 +9463,7 @@ static int ipr_probe_ioa(struct pci_dev *pdev,
 		ipr_number_of_msix = IPR_MAX_MSIX_VECTORS;
 	}
 
-	irq_flag = PCI_IRQ_LEGACY;
+	irq_flag = PCI_IRQ_INTX;
 	if (ioa_cfg->ipr_chip->has_msi)
 		irq_flag |= PCI_IRQ_MSI | PCI_IRQ_MSIX;
 	rc = pci_alloc_irq_vectors(pdev, 1, ipr_number_of_msix, irq_flag);
@@ -6300,7 +6300,7 @@ static int megasas_init_fw(struct megasas_instance *instance)
 	}
 
 	if (!instance->msix_vectors) {
-		i = pci_alloc_irq_vectors(instance->pdev, 1, 1, PCI_IRQ_LEGACY);
+		i = pci_alloc_irq_vectors(instance->pdev, 1, 1, PCI_IRQ_INTX);
 		if (i < 0)
 			goto fail_init_adapter;
 	}
@@ -7839,7 +7839,7 @@ megasas_resume(struct device *dev)
 
 	if (!instance->msix_vectors) {
 		rval = pci_alloc_irq_vectors(instance->pdev, 1, 1,
-					     PCI_IRQ_LEGACY);
+					     PCI_IRQ_INTX);
 		if (rval < 0)
 			goto fail_reenable_msix;
 	}
@@ -3515,7 +3515,7 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
 	ioc_info(ioc, "High IOPs queues : disabled\n");
 	ioc->reply_queue_count = 1;
 	ioc->iopoll_q_start_index = ioc->reply_queue_count - 0;
-	r = pci_alloc_irq_vectors(ioc->pdev, 1, 1, PCI_IRQ_LEGACY);
+	r = pci_alloc_irq_vectors(ioc->pdev, 1, 1, PCI_IRQ_INTX);
 	if (r < 0) {
 		dfailprintk(ioc,
 			    ioc_info(ioc, "pci_alloc_irq_vector(legacy) failed (r=%d) !!!\n",
@@ -4033,7 +4033,7 @@ static int
 pmcraid_register_interrupt_handler(struct pmcraid_instance *pinstance)
 {
 	struct pci_dev *pdev = pinstance->pdev;
-	unsigned int irq_flag = PCI_IRQ_LEGACY, flag;
+	unsigned int irq_flag = PCI_IRQ_INTX, flag;
 	int num_hrrq, rc, i;
 	irq_handler_t isr;
 
@@ -1346,7 +1346,7 @@ exit:
 
 static int pvscsi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
-	unsigned int irq_flag = PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY;
+	unsigned int irq_flag = PCI_IRQ_ALL_TYPES;
 	struct pvscsi_adapter *adapter;
 	struct pvscsi_adapter adapter_temp;
 	struct Scsi_Host *host = NULL;
@@ -4108,7 +4108,7 @@ pciserial_init_ports(struct pci_dev *dev, const struct pciserial_board *board)
 		rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES);
 	} else {
 		pci_dbg(dev, "Using legacy interrupts\n");
-		rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY);
+		rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_INTX);
 	}
 	if (rc < 0) {
 		kfree(priv);
@@ -189,7 +189,8 @@ int usb_hcd_pci_probe(struct pci_dev *dev, const struct hc_driver *driver)
 	 * make sure irq setup is not touched for xhci in generic hcd code
 	 */
 	if ((driver->flags & HCD_MASK) < HCD_USB3) {
-		retval = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY | PCI_IRQ_MSI);
+		retval = pci_alloc_irq_vectors(dev, 1, 1,
+					       PCI_IRQ_INTX | PCI_IRQ_MSI);
 		if (retval < 0) {
 			dev_err(&dev->dev,
 				"Found HC with no IRQ. Check BIOS/PCI %s setup!\n",
@@ -1079,8 +1079,6 @@ enum {
 #define PCI_IRQ_MSIX		(1 << 2) /* Allow MSI-X interrupts */
 #define PCI_IRQ_AFFINITY	(1 << 3) /* Auto-assign affinity */
 
-#define PCI_IRQ_LEGACY		PCI_IRQ_INTX /* Deprecated! Use PCI_IRQ_INTX */
-
 /* These external functions are only available when PCI support is enabled */
 #ifdef CONFIG_PCI
 
@@ -1317,7 +1315,6 @@ int pci_user_write_config_word(struct pci_dev *dev, int where, u16 val);
 int pci_user_write_config_dword(struct pci_dev *dev, int where, u32 val);
 
 int __must_check pci_enable_device(struct pci_dev *dev);
-int __must_check pci_enable_device_io(struct pci_dev *dev);
 int __must_check pci_enable_device_mem(struct pci_dev *dev);
 int __must_check pci_reenable_device(struct pci_dev *);
 int __must_check pcim_enable_device(struct pci_dev *pdev);
@@ -1650,8 +1647,7 @@ int pci_set_vga_state(struct pci_dev *pdev, bool decode,
 */
 #define PCI_IRQ_VIRTUAL		(1 << 4)
 
-#define PCI_IRQ_ALL_TYPES \
-	(PCI_IRQ_LEGACY | PCI_IRQ_MSI | PCI_IRQ_MSIX)
+#define PCI_IRQ_ALL_TYPES	(PCI_IRQ_INTX | PCI_IRQ_MSI | PCI_IRQ_MSIX)
 
 #include <linux/dmapool.h>
 
@@ -1721,7 +1717,7 @@ pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
 			       unsigned int max_vecs, unsigned int flags,
 			       struct irq_affinity *aff_desc)
 {
-	if ((flags & PCI_IRQ_LEGACY) && min_vecs == 1 && dev->irq)
+	if ((flags & PCI_IRQ_INTX) && min_vecs == 1 && dev->irq)
 		return 1;
 	return -ENOSPC;
 }
@@ -2020,10 +2016,9 @@ static inline int pci_register_driver(struct pci_driver *drv)
 static inline void pci_unregister_driver(struct pci_driver *drv) { }
 static inline u8 pci_find_capability(struct pci_dev *dev, int cap)
 { return 0; }
-static inline int pci_find_next_capability(struct pci_dev *dev, u8 post,
-					   int cap)
+static inline u8 pci_find_next_capability(struct pci_dev *dev, u8 post, int cap)
 { return 0; }
-static inline int pci_find_ext_capability(struct pci_dev *dev, int cap)
+static inline u16 pci_find_ext_capability(struct pci_dev *dev, int cap)
 { return 0; }
 
 static inline u64 pci_get_dsn(struct pci_dev *dev)
@@ -2525,7 +2520,12 @@ static inline struct pci_dev *pcie_find_root_port(struct pci_dev *dev)
 
 static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
 {
-	return dev->error_state == pci_channel_io_perm_failure;
+	/*
+	 * error_state is set in pci_dev_set_io_state() using xchg/cmpxchg()
+	 * and read w/o common lock. READ_ONCE() ensures compiler cannot cache
+	 * the value (e.g. inside the loop in pci_dev_wait()).
+	 */
+	return READ_ONCE(dev->error_state) == pci_channel_io_perm_failure;
 }
 
 void pci_request_acs(void);
@@ -343,7 +343,7 @@ static int avs_hdac_acquire_irq(struct avs_dev *adev)
 	int ret;
 
 	/* request one and check that we only got one interrupt */
-	ret = pci_alloc_irq_vectors(pci, 1, 1, PCI_IRQ_MSI | PCI_IRQ_LEGACY);
+	ret = pci_alloc_irq_vectors(pci, 1, 1, PCI_IRQ_MSI | PCI_IRQ_INTX);
 	if (ret != 1) {
 		dev_err(adev->dev, "Failed to allocate IRQ vector: %d\n", ret);
 		return ret;