Updates for interrupt core and drivers:
Core code:

 - Make the managed interrupts more robust by shutting them down in the
   core code when the assigned affinity mask does not contain online
   CPUs.

 - Make the irq simulator chip work on RT

 - A small set of cpumask and power management cleanups

Drivers:

 - A set of changes which mark GPIO interrupt chips immutable to
   prevent the GPIO subsystem from modifying them under the hood. This
   provides the necessary infrastructure and converts a set of GPIO and
   pinctrl drivers over.

 - A set of changes to make the pseudo-NMI handling for GICv3 more
   robust: a missing barrier and consistent handling of the priority
   mask.

 - Another set of GICv3 improvements and fixes, but nothing outstanding

 - The usual set of improvements and cleanups all over the place

 - No new irqchip drivers and not even a new device tree binding! 100+
   interrupt chips are truly enough.

Merge tag 'irq-core-2022-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull interrupt handling updates from Thomas Gleixner.

* tag 'irq-core-2022-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
  irqchip: Add Kconfig symbols for sunxi drivers
  irqchip/gic-v3: Fix priority mask handling
  irqchip/gic-v3: Refactor ISB + EOIR at ack time
  irqchip/gic-v3: Ensure pseudo-NMIs have an ISB between ack and handling
  genirq/irq_sim: Make the irq_work always run in hard irq context
  irqchip/armada-370-xp: Do not touch Performance Counter Overflow on A375, A38x, A39x
  irqchip/gic: Improved warning about incorrect type
  irqchip/csky: Return true/false (not 1/0) from bool functions
  irqchip/imx-irqsteer: Add runtime PM support
  irqchip/imx-irqsteer: Constify irq_chip struct
  irqchip/armada-370-xp: Enable MSI affinity configuration
  irqchip/aspeed-scu-ic: Fix irq_of_parse_and_map() return value
  irqchip/aspeed-i2c-ic: Fix irq_of_parse_and_map() return value
  irqchip/sun6i-r: Use NULL for chip_data
  irqchip/xtensa-mx: Fix initial IRQ affinity in non-SMP setup
  irqchip/exiu: Fix acknowledgment of edge triggered interrupts
  irqchip/gic-v3: Claim iomem resources
  dt-bindings: interrupt-controller: arm,gic-v3: Make the v2 compat requirements explicit
  irqchip/gic-v3: Relax polling of GIC{R,D}_CTLR.RWP
  irqchip/gic-v3: Detect LPI invalidation MMIO registers
  ...
commit fcfde8a7cf
@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: ARM Generic Interrupt Controller, version 3
 
 maintainers:
-  - Marc Zyngier <marc.zyngier@arm.com>
+  - Marc Zyngier <maz@kernel.org>
 
 description: |
   AArch64 SMP cores are often associated with a GICv3, providing Private
@@ -78,7 +78,11 @@ properties:
       - GIC Hypervisor interface (GICH)
       - GIC Virtual CPU interface (GICV)
 
-      GICC, GICH and GICV are optional.
+      GICC, GICH and GICV are optional, but must be described if the CPUs
+      support them. Examples of such CPUs are ARM's implementations of the
+      ARMv8.0 architecture such as Cortex-A32, A34, A35, A53, A57, A72 and
+      A73 (this list is not exhaustive).
 
     minItems: 2
     maxItems: 4096   # Should be enough?
@@ -417,30 +417,66 @@ struct gpio_irq_chip inside struct gpio_chip before adding the gpio_chip.
 If you do this, the additional irq_chip will be set up by gpiolib at the
 same time as setting up the rest of the GPIO functionality. The following
 is a typical example of a chained cascaded interrupt handler using
-the gpio_irq_chip:
+the gpio_irq_chip. Note how the mask/unmask (or disable/enable) functions
+call into the core gpiolib code:
 
 .. code-block:: c
 
-  /* Typical state container with dynamic irqchip */
+  /* Typical state container */
   struct my_gpio {
       struct gpio_chip gc;
-      struct irq_chip irq;
   };
 
+  static void my_gpio_mask_irq(struct irq_data *d)
+  {
+      struct gpio_chip *gc = irq_desc_get_handler_data(d);
+
+      /*
+       * Perform any necessary action to mask the interrupt,
+       * and then call into the core code to synchronise the
+       * state.
+       */
+
+      gpiochip_disable_irq(gc, d->hwirq);
+  }
+
+  static void my_gpio_unmask_irq(struct irq_data *d)
+  {
+      struct gpio_chip *gc = irq_desc_get_handler_data(d);
+
+      gpiochip_enable_irq(gc, d->hwirq);
+
+      /*
+       * Perform any necessary action to unmask the interrupt,
+       * after having called into the core code to synchronise
+       * the state.
+       */
+  }
+
+  /*
+   * Statically populate the irqchip. Note that it is made const
+   * (further indicated by the IRQCHIP_IMMUTABLE flag), and that
+   * the GPIOCHIP_IRQ_RESOURCE_HELPER macro adds some extra
+   * callbacks to the structure.
+   */
+  static const struct irq_chip my_gpio_irq_chip = {
+      .name = "my_gpio_irq",
+      .irq_ack = my_gpio_ack_irq,
+      .irq_mask = my_gpio_mask_irq,
+      .irq_unmask = my_gpio_unmask_irq,
+      .irq_set_type = my_gpio_set_irq_type,
+      .flags = IRQCHIP_IMMUTABLE,
+      /* Provide the gpio resource callbacks */
+      GPIOCHIP_IRQ_RESOURCE_HELPERS,
+  };
+
   int irq; /* from platform etc */
   struct my_gpio *g;
   struct gpio_irq_chip *girq;
 
-  /* Set up the irqchip dynamically */
-  g->irq.name = "my_gpio_irq";
-  g->irq.irq_ack = my_gpio_ack_irq;
-  g->irq.irq_mask = my_gpio_mask_irq;
-  g->irq.irq_unmask = my_gpio_unmask_irq;
-  g->irq.irq_set_type = my_gpio_set_irq_type;
-
   /* Get a pointer to the gpio_irq_chip */
   girq = &g->gc.irq;
-  girq->chip = &g->irq;
+  gpio_irq_chip_set_chip(girq, &my_gpio_irq_chip);
   girq->parent_handler = ftgpio_gpio_irq_handler;
   girq->num_parents = 1;
   girq->parents = devm_kcalloc(dev, 1, sizeof(*girq->parents),
@@ -458,23 +494,58 @@ the interrupt separately and go with it:
 
 .. code-block:: c
 
-  /* Typical state container with dynamic irqchip */
+  /* Typical state container */
   struct my_gpio {
       struct gpio_chip gc;
-      struct irq_chip irq;
   };
 
+  static void my_gpio_mask_irq(struct irq_data *d)
+  {
+      struct gpio_chip *gc = irq_desc_get_handler_data(d);
+
+      /*
+       * Perform any necessary action to mask the interrupt,
+       * and then call into the core code to synchronise the
+       * state.
+       */
+
+      gpiochip_disable_irq(gc, d->hwirq);
+  }
+
+  static void my_gpio_unmask_irq(struct irq_data *d)
+  {
+      struct gpio_chip *gc = irq_desc_get_handler_data(d);
+
+      gpiochip_enable_irq(gc, d->hwirq);
+
+      /*
+       * Perform any necessary action to unmask the interrupt,
+       * after having called into the core code to synchronise
+       * the state.
+       */
+  }
+
+  /*
+   * Statically populate the irqchip. Note that it is made const
+   * (further indicated by the IRQCHIP_IMMUTABLE flag), and that
+   * the GPIOCHIP_IRQ_RESOURCE_HELPER macro adds some extra
+   * callbacks to the structure.
+   */
+  static const struct irq_chip my_gpio_irq_chip = {
+      .name = "my_gpio_irq",
+      .irq_ack = my_gpio_ack_irq,
+      .irq_mask = my_gpio_mask_irq,
+      .irq_unmask = my_gpio_unmask_irq,
+      .irq_set_type = my_gpio_set_irq_type,
+      .flags = IRQCHIP_IMMUTABLE,
+      /* Provide the gpio resource callbacks */
+      GPIOCHIP_IRQ_RESOURCE_HELPERS,
+  };
+
   int irq; /* from platform etc */
   struct my_gpio *g;
   struct gpio_irq_chip *girq;
 
-  /* Set up the irqchip dynamically */
-  g->irq.name = "my_gpio_irq";
-  g->irq.irq_ack = my_gpio_ack_irq;
-  g->irq.irq_mask = my_gpio_mask_irq;
-  g->irq.irq_unmask = my_gpio_unmask_irq;
-  g->irq.irq_set_type = my_gpio_set_irq_type;
-
   ret = devm_request_threaded_irq(dev, irq, NULL,
 		irq_thread_fn, IRQF_ONESHOT, "my-chip", g);
   if (ret < 0)
@@ -482,7 +553,7 @@ the interrupt separately and go with it:
 
   /* Get a pointer to the gpio_irq_chip */
   girq = &g->gc.irq;
-  girq->chip = &g->irq;
+  gpio_irq_chip_set_chip(girq, &my_gpio_irq_chip);
   /* This will let us handle the parent IRQ in the driver */
   girq->parent_handler = NULL;
   girq->num_parents = 0;
@@ -500,24 +571,61 @@ In this case the typical set-up will look like this:
 
   /* Typical state container with dynamic irqchip */
   struct my_gpio {
       struct gpio_chip gc;
-      struct irq_chip irq;
       struct fwnode_handle *fwnode;
   };
 
-  int irq; /* from platform etc */
+  static void my_gpio_mask_irq(struct irq_data *d)
+  {
+      struct gpio_chip *gc = irq_desc_get_handler_data(d);
+
+      /*
+       * Perform any necessary action to mask the interrupt,
+       * and then call into the core code to synchronise the
+       * state.
+       */
+
+      gpiochip_disable_irq(gc, d->hwirq);
+      irq_mask_mask_parent(d);
+  }
+
+  static void my_gpio_unmask_irq(struct irq_data *d)
+  {
+      struct gpio_chip *gc = irq_desc_get_handler_data(d);
+
+      gpiochip_enable_irq(gc, d->hwirq);
+
+      /*
+       * Perform any necessary action to unmask the interrupt,
+       * after having called into the core code to synchronise
+       * the state.
+       */
+
+      irq_mask_unmask_parent(d);
+  }
+
+  /*
+   * Statically populate the irqchip. Note that it is made const
+   * (further indicated by the IRQCHIP_IMMUTABLE flag), and that
+   * the GPIOCHIP_IRQ_RESOURCE_HELPER macro adds some extra
+   * callbacks to the structure.
+   */
+  static const struct irq_chip my_gpio_irq_chip = {
+      .name = "my_gpio_irq",
+      .irq_ack = my_gpio_ack_irq,
+      .irq_mask = my_gpio_mask_irq,
+      .irq_unmask = my_gpio_unmask_irq,
+      .irq_set_type = my_gpio_set_irq_type,
+      .flags = IRQCHIP_IMMUTABLE,
+      /* Provide the gpio resource callbacks */
+      GPIOCHIP_IRQ_RESOURCE_HELPERS,
+  };
+
   struct my_gpio *g;
   struct gpio_irq_chip *girq;
 
-  /* Set up the irqchip dynamically */
-  g->irq.name = "my_gpio_irq";
-  g->irq.irq_ack = my_gpio_ack_irq;
-  g->irq.irq_mask = my_gpio_mask_irq;
-  g->irq.irq_unmask = my_gpio_unmask_irq;
-  g->irq.irq_set_type = my_gpio_set_irq_type;
-
   /* Get a pointer to the gpio_irq_chip */
   girq = &g->gc.irq;
-  girq->chip = &g->irq;
+  gpio_irq_chip_set_chip(girq, &my_gpio_irq_chip);
   girq->default_type = IRQ_TYPE_NONE;
   girq->handler = handle_bad_irq;
   girq->fwnode = g->fwnode;
@@ -605,8 +713,9 @@ When implementing an irqchip inside a GPIO driver, these two functions should
 typically be called in the .irq_disable() and .irq_enable() callbacks from the
 irqchip.
 
-When using the gpiolib irqchip helpers, these callbacks are automatically
-assigned.
+When IRQCHIP_IMMUTABLE is not advertised by the irqchip, these callbacks
+are automatically assigned. This behaviour is deprecated and on its way
+to be removed from the kernel.
 
 
 Real-Time compliance for GPIO IRQ chips
@@ -48,6 +48,7 @@ static inline u32 read_ ## a64(void)		\
 	return read_sysreg(a32);		\
 }						\
 
+CPUIF_MAP(ICC_EOIR1, ICC_EOIR1_EL1)
 CPUIF_MAP(ICC_PMR, ICC_PMR_EL1)
 CPUIF_MAP(ICC_AP0R0, ICC_AP0R0_EL1)
 CPUIF_MAP(ICC_AP0R1, ICC_AP0R1_EL1)
@@ -63,12 +64,6 @@ CPUIF_MAP(ICC_AP1R3, ICC_AP1R3_EL1)
 
 /* Low-level accessors */
 
-static inline void gic_write_eoir(u32 irq)
-{
-	write_sysreg(irq, ICC_EOIR1);
-	isb();
-}
-
 static inline void gic_write_dir(u32 val)
 {
 	write_sysreg(val, ICC_DIR);
@@ -4,10 +4,7 @@ menuconfig ARCH_SUNXI
 	depends on ARCH_MULTI_V5 || ARCH_MULTI_V7
 	select ARCH_HAS_RESET_CONTROLLER
 	select CLKSRC_MMIO
-	select GENERIC_IRQ_CHIP
 	select GPIOLIB
-	select IRQ_DOMAIN_HIERARCHY
-	select IRQ_FASTEOI_HIERARCHY_HANDLERS
 	select PINCTRL
 	select PM_OPP
 	select SUN4I_TIMER
@@ -22,10 +19,12 @@ if ARCH_MULTI_V7
 config MACH_SUN4I
 	bool "Allwinner A10 (sun4i) SoCs support"
 	default ARCH_SUNXI
+	select SUN4I_INTC
 
 config MACH_SUN5I
 	bool "Allwinner A10s / A13 (sun5i) SoCs support"
 	default ARCH_SUNXI
+	select SUN4I_INTC
 	select SUN5I_HSTIMER
 
 config MACH_SUN6I
@@ -34,6 +33,8 @@ config MACH_SUN6I
 	select ARM_GIC
 	select MFD_SUN6I_PRCM
 	select SUN5I_HSTIMER
+	select SUN6I_R_INTC
+	select SUNXI_NMI_INTC
 
 config MACH_SUN7I
 	bool "Allwinner A20 (sun7i) SoCs support"
@@ -43,17 +44,21 @@ config MACH_SUN7I
 	select ARCH_SUPPORTS_BIG_ENDIAN
 	select HAVE_ARM_ARCH_TIMER
 	select SUN5I_HSTIMER
+	select SUNXI_NMI_INTC
 
 config MACH_SUN8I
 	bool "Allwinner sun8i Family SoCs support"
 	default ARCH_SUNXI
 	select ARM_GIC
 	select MFD_SUN6I_PRCM
+	select SUN6I_R_INTC
+	select SUNXI_NMI_INTC
 
 config MACH_SUN9I
 	bool "Allwinner (sun9i) SoCs support"
 	default ARCH_SUNXI
 	select ARM_GIC
+	select SUNXI_NMI_INTC
 
 config ARCH_SUNXI_MC_SMP
 	bool
@@ -69,6 +74,7 @@ if ARCH_MULTI_V5
 config MACH_SUNIV
 	bool "Allwinner ARMv5 F-series (suniv) SoCs support"
 	default ARCH_SUNXI
+	select SUN4I_INTC
 	help
 	  Support for Allwinner suniv ARMv5 SoCs.
 	  (F1C100A, F1C100s, F1C200s, F1C500, F1C600)
@@ -11,12 +11,11 @@ config ARCH_ACTIONS
 config ARCH_SUNXI
 	bool "Allwinner sunxi 64-bit SoC Family"
 	select ARCH_HAS_RESET_CONTROLLER
-	select GENERIC_IRQ_CHIP
-	select IRQ_DOMAIN_HIERARCHY
-	select IRQ_FASTEOI_HIERARCHY_HANDLERS
 	select PINCTRL
 	select RESET_CONTROLLER
 	select SUN4I_TIMER
+	select SUN6I_R_INTC
+	select SUNXI_NMI_INTC
 	help
 	  This enables support for Allwinner sunxi based SoCs like the A64.
 
@@ -253,6 +252,7 @@ config ARCH_INTEL_SOCFPGA
 
 config ARCH_SYNQUACER
 	bool "Socionext SynQuacer SoC Family"
+	select IRQ_FASTEOI_HIERARCHY_HANDLERS
 
 config ARCH_TEGRA
 	bool "NVIDIA Tegra SoC Family"
@@ -26,12 +26,6 @@
  * sets the GP register's most significant bits to 0 with an explicit cast.
  */
 
-static inline void gic_write_eoir(u32 irq)
-{
-	write_sysreg_s(irq, SYS_ICC_EOIR1_EL1);
-	isb();
-}
-
 static __always_inline void gic_write_dir(u32 irq)
 {
 	write_sysreg_s(irq, SYS_ICC_DIR_EL1);
@@ -178,3 +178,22 @@ discussed but the idea is to provide a low-level access point
 for debugging and hacking and to expose all lines without the
 need of any exporting. Also provide ample ammunition to shoot
 oneself in the foot, because this is debugfs after all.
+
+
+Moving over to immutable irq_chip structures
+
+Most of the gpio chips implementing interrupt support rely on gpiolib
+intercepting some of the irq_chip callbacks, preventing the structures
+from being made read-only and forcing duplication of structures that
+should otherwise be unique.
+
+The solution is to call into the gpiolib code when needed (resource
+management, enable/disable or unmask/mask callbacks), and to let the
+core code know about that by exposing a flag (IRQCHIP_IMMUTABLE) in
+the irq_chip structure. The irq_chip structure can then be made unique
+and const.
+
+A small number of drivers have been converted (pl061, tegra186, msm,
+amd, apple), and can be used as examples of how to proceed with this
+conversion. Note that drivers using the generic irqchip framework
+cannot be converted yet, but watch this space!
@@ -52,7 +52,6 @@ struct pl061 {
 
 	void __iomem		*base;
 	struct gpio_chip	gc;
-	struct irq_chip		irq_chip;
 	int			parent_irq;
 
 #ifdef CONFIG_PM
@@ -241,6 +240,8 @@ static void pl061_irq_mask(struct irq_data *d)
 	gpioie = readb(pl061->base + GPIOIE) & ~mask;
 	writeb(gpioie, pl061->base + GPIOIE);
 	raw_spin_unlock(&pl061->lock);
+
+	gpiochip_disable_irq(gc, d->hwirq);
 }
 
 static void pl061_irq_unmask(struct irq_data *d)
@@ -250,6 +251,8 @@ static void pl061_irq_unmask(struct irq_data *d)
 	u8 mask = BIT(irqd_to_hwirq(d) % PL061_GPIO_NR);
 	u8 gpioie;
 
+	gpiochip_enable_irq(gc, d->hwirq);
+
 	raw_spin_lock(&pl061->lock);
 	gpioie = readb(pl061->base + GPIOIE) | mask;
 	writeb(gpioie, pl061->base + GPIOIE);
@@ -283,6 +286,24 @@ static int pl061_irq_set_wake(struct irq_data *d, unsigned int state)
 	return irq_set_irq_wake(pl061->parent_irq, state);
 }
 
+static void pl061_irq_print_chip(struct irq_data *data, struct seq_file *p)
+{
+	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+
+	seq_printf(p, dev_name(gc->parent));
+}
+
+static const struct irq_chip pl061_irq_chip = {
+	.irq_ack	= pl061_irq_ack,
+	.irq_mask	= pl061_irq_mask,
+	.irq_unmask	= pl061_irq_unmask,
+	.irq_set_type	= pl061_irq_type,
+	.irq_set_wake	= pl061_irq_set_wake,
+	.irq_print_chip	= pl061_irq_print_chip,
+	.flags		= IRQCHIP_IMMUTABLE,
+	GPIOCHIP_IRQ_RESOURCE_HELPERS,
+};
+
 static int pl061_probe(struct amba_device *adev, const struct amba_id *id)
 {
 	struct device *dev = &adev->dev;
@@ -315,13 +336,6 @@ static int pl061_probe(struct amba_device *adev, const struct amba_id *id)
 	/*
 	 * irq_chip support
 	 */
-	pl061->irq_chip.name = dev_name(dev);
-	pl061->irq_chip.irq_ack	= pl061_irq_ack;
-	pl061->irq_chip.irq_mask = pl061_irq_mask;
-	pl061->irq_chip.irq_unmask = pl061_irq_unmask;
-	pl061->irq_chip.irq_set_type = pl061_irq_type;
-	pl061->irq_chip.irq_set_wake = pl061_irq_set_wake;
-
 	writeb(0, pl061->base + GPIOIE); /* disable irqs */
 	irq = adev->irq[0];
 	if (!irq)
@@ -329,7 +343,7 @@ static int pl061_probe(struct amba_device *adev, const struct amba_id *id)
 	pl061->parent_irq = irq;
 
 	girq = &pl061->gc.irq;
-	girq->chip = &pl061->irq_chip;
+	gpio_irq_chip_set_chip(girq, &pl061_irq_chip);
 	girq->parent_handler = pl061_irq_handler;
 	girq->num_parents = 1;
 	girq->parents = devm_kcalloc(dev, 1, sizeof(*girq->parents),
@@ -80,7 +80,6 @@ struct tegra_gpio_soc {
 
 struct tegra_gpio {
 	struct gpio_chip gpio;
-	struct irq_chip intc;
 	unsigned int num_irq;
 	unsigned int *irq;
 
@@ -372,6 +371,8 @@ static void tegra186_irq_mask(struct irq_data *data)
 	value = readl(base + TEGRA186_GPIO_ENABLE_CONFIG);
 	value &= ~TEGRA186_GPIO_ENABLE_CONFIG_INTERRUPT;
 	writel(value, base + TEGRA186_GPIO_ENABLE_CONFIG);
+
+	gpiochip_disable_irq(&gpio->gpio, data->hwirq);
 }
 
 static void tegra186_irq_unmask(struct irq_data *data)
@@ -385,6 +386,8 @@ static void tegra186_irq_unmask(struct irq_data *data)
 	if (WARN_ON(base == NULL))
 		return;
 
+	gpiochip_enable_irq(&gpio->gpio, data->hwirq);
+
 	value = readl(base + TEGRA186_GPIO_ENABLE_CONFIG);
 	value |= TEGRA186_GPIO_ENABLE_CONFIG_INTERRUPT;
 	writel(value, base + TEGRA186_GPIO_ENABLE_CONFIG);
@@ -456,6 +459,24 @@ static int tegra186_irq_set_wake(struct irq_data *data, unsigned int on)
 	return 0;
 }
 
+static void tegra186_irq_print_chip(struct irq_data *data, struct seq_file *p)
+{
+	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+
+	seq_printf(p, dev_name(gc->parent));
+}
+
+static const struct irq_chip tegra186_gpio_irq_chip = {
+	.irq_ack	= tegra186_irq_ack,
+	.irq_mask	= tegra186_irq_mask,
+	.irq_unmask	= tegra186_irq_unmask,
+	.irq_set_type	= tegra186_irq_set_type,
+	.irq_set_wake	= tegra186_irq_set_wake,
+	.irq_print_chip	= tegra186_irq_print_chip,
+	.flags		= IRQCHIP_IMMUTABLE,
+	GPIOCHIP_IRQ_RESOURCE_HELPERS,
+};
+
 static void tegra186_gpio_irq(struct irq_desc *desc)
 {
 	struct tegra_gpio *gpio = irq_desc_get_handler_data(desc);
@@ -760,15 +781,8 @@ static int tegra186_gpio_probe(struct platform_device *pdev)
 	gpio->gpio.of_xlate = tegra186_gpio_of_xlate;
 #endif /* CONFIG_OF_GPIO */
 
-	gpio->intc.name = dev_name(&pdev->dev);
-	gpio->intc.irq_ack = tegra186_irq_ack;
-	gpio->intc.irq_mask = tegra186_irq_mask;
-	gpio->intc.irq_unmask = tegra186_irq_unmask;
-	gpio->intc.irq_set_type = tegra186_irq_set_type;
-	gpio->intc.irq_set_wake = tegra186_irq_set_wake;
-
 	irq = &gpio->gpio.irq;
-	irq->chip = &gpio->intc;
+	gpio_irq_chip_set_chip(irq, &tegra186_gpio_irq_chip);
 	irq->fwnode = of_node_to_fwnode(pdev->dev.of_node);
 	irq->child_to_parent_hwirq = tegra186_gpio_child_to_parent_hwirq;
 	irq->populate_parent_alloc_arg = tegra186_gpio_populate_parent_fwspec;
@@ -1433,19 +1433,21 @@ static int gpiochip_to_irq(struct gpio_chip *gc, unsigned int offset)
 	return irq_create_mapping(domain, offset);
 }
 
-static int gpiochip_irq_reqres(struct irq_data *d)
+int gpiochip_irq_reqres(struct irq_data *d)
 {
 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
 
 	return gpiochip_reqres_irq(gc, d->hwirq);
 }
+EXPORT_SYMBOL(gpiochip_irq_reqres);
 
-static void gpiochip_irq_relres(struct irq_data *d)
+void gpiochip_irq_relres(struct irq_data *d)
 {
 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
 
 	gpiochip_relres_irq(gc, d->hwirq);
 }
+EXPORT_SYMBOL(gpiochip_irq_relres);
 
 static void gpiochip_irq_mask(struct irq_data *d)
 {
@@ -1485,6 +1487,11 @@ static void gpiochip_set_irq_hooks(struct gpio_chip *gc)
 {
 	struct irq_chip *irqchip = gc->irq.chip;
 
+	if (irqchip->flags & IRQCHIP_IMMUTABLE)
+		return;
+
+	chip_warn(gc, "not an immutable chip, please consider fixing it!\n");
+
 	if (!irqchip->irq_request_resources &&
 	    !irqchip->irq_release_resources) {
 		irqchip->irq_request_resources = gpiochip_irq_reqres;
@@ -1652,7 +1659,7 @@ static void gpiochip_irqchip_remove(struct gpio_chip *gc)
 		irq_domain_remove(gc->irq.domain);
 	}
 
-	if (irqchip) {
+	if (irqchip && !(irqchip->flags & IRQCHIP_IMMUTABLE)) {
 		if (irqchip->irq_request_resources == gpiochip_irq_reqres) {
 			irqchip->irq_request_resources = NULL;
 			irqchip->irq_release_resources = NULL;
@@ -257,6 +257,18 @@ config ST_IRQCHIP
 	help
 	  Enables SysCfg Controlled IRQs on STi based platforms.
 
+config SUN4I_INTC
+	bool
+
+config SUN6I_R_INTC
+	bool
+	select IRQ_DOMAIN_HIERARCHY
+	select IRQ_FASTEOI_HIERARCHY_HANDLERS
+
+config SUNXI_NMI_INTC
+	bool
+	select GENERIC_IRQ_CHIP
+
 config TB10X_IRQC
 	bool
 	select IRQ_DOMAIN
@@ -23,9 +23,9 @@ obj-$(CONFIG_OMPIC) += irq-ompic.o
 obj-$(CONFIG_OR1K_PIC) += irq-or1k-pic.o
 obj-$(CONFIG_ORION_IRQCHIP) += irq-orion.o
 obj-$(CONFIG_OMAP_IRQCHIP) += irq-omap-intc.o
-obj-$(CONFIG_ARCH_SUNXI) += irq-sun4i.o
-obj-$(CONFIG_ARCH_SUNXI) += irq-sun6i-r.o
-obj-$(CONFIG_ARCH_SUNXI) += irq-sunxi-nmi.o
+obj-$(CONFIG_SUN4I_INTC) += irq-sun4i.o
+obj-$(CONFIG_SUN6I_R_INTC) += irq-sun6i-r.o
+obj-$(CONFIG_SUNXI_NMI_INTC) += irq-sunxi-nmi.o
 obj-$(CONFIG_ARCH_SPEAR3XX) += spear-shirq.o
 obj-$(CONFIG_ARM_GIC) += irq-gic.o irq-gic-common.o
 obj-$(CONFIG_ARM_GIC_PM) += irq-gic-pm.o
@@ -209,15 +209,29 @@ static struct msi_domain_info armada_370_xp_msi_domain_info = {
 
 static void armada_370_xp_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 {
+	unsigned int cpu = cpumask_first(irq_data_get_effective_affinity_mask(data));
+
 	msg->address_lo = lower_32_bits(msi_doorbell_addr);
 	msg->address_hi = upper_32_bits(msi_doorbell_addr);
-	msg->data = 0xf00 | (data->hwirq + PCI_MSI_DOORBELL_START);
+	msg->data = BIT(cpu + 8) | (data->hwirq + PCI_MSI_DOORBELL_START);
 }
 
 static int armada_370_xp_msi_set_affinity(struct irq_data *irq_data,
 					  const struct cpumask *mask, bool force)
 {
+	unsigned int cpu;
+
+	if (!force)
+		cpu = cpumask_any_and(mask, cpu_online_mask);
+	else
+		cpu = cpumask_first(mask);
+
+	if (cpu >= nr_cpu_ids)
 		return -EINVAL;
+
+	irq_data_update_effective_affinity(irq_data, cpumask_of(cpu));
+
+	return IRQ_SET_MASK_OK;
 }
 
 static struct irq_chip armada_370_xp_msi_bottom_irq_chip = {
@@ -264,11 +278,21 @@ static const struct irq_domain_ops armada_370_xp_msi_domain_ops = {
 	.free = armada_370_xp_msi_free,
 };
 
-static int armada_370_xp_msi_init(struct device_node *node,
-				  phys_addr_t main_int_phys_base)
+static void armada_370_xp_msi_reenable_percpu(void)
 {
 	u32 reg;
 
+	/* Enable MSI doorbell mask and combined cpu local interrupt */
+	reg = readl(per_cpu_int_base + ARMADA_370_XP_IN_DRBEL_MSK_OFFS)
+		| PCI_MSI_DOORBELL_MASK;
+	writel(reg, per_cpu_int_base + ARMADA_370_XP_IN_DRBEL_MSK_OFFS);
+	/* Unmask local doorbell interrupt */
+	writel(1, per_cpu_int_base + ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
+}
+
+static int armada_370_xp_msi_init(struct device_node *node,
+				  phys_addr_t main_int_phys_base)
+{
 	msi_doorbell_addr = main_int_phys_base +
 		ARMADA_370_XP_SW_TRIG_INT_OFFS;
 
@@ -287,18 +311,13 @@ static int armada_370_xp_msi_init(struct device_node *node,
 		return -ENOMEM;
 	}
 
-	reg = readl(per_cpu_int_base + ARMADA_370_XP_IN_DRBEL_MSK_OFFS)
-		| PCI_MSI_DOORBELL_MASK;
-
-	writel(reg, per_cpu_int_base +
-	       ARMADA_370_XP_IN_DRBEL_MSK_OFFS);
-
-	/* Unmask IPI interrupt */
-	writel(1, per_cpu_int_base + ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
+	armada_370_xp_msi_reenable_percpu();
 
 	return 0;
 }
 #else
+static void armada_370_xp_msi_reenable_percpu(void) {}
+
 static inline int armada_370_xp_msi_init(struct device_node *node,
 					 phys_addr_t main_int_phys_base)
 {
@@ -308,7 +327,16 @@ static inline int armada_370_xp_msi_init(struct device_node *node,
 
 static void armada_xp_mpic_perf_init(void)
 {
-	unsigned long cpuid = cpu_logical_map(smp_processor_id());
+	unsigned long cpuid;
+
+	/*
+	 * This Performance Counter Overflow interrupt is specific for
+	 * Armada 370 and XP. It is not available on Armada 375, 38x and 39x.
+	 */
+	if (!of_machine_is_compatible("marvell,armada-370-xp"))
+		return;
+
+	cpuid = cpu_logical_map(smp_processor_id());
 
 	/* Enable Performance Counter Overflow interrupts */
 	writel(ARMADA_370_XP_INT_CAUSE_PERF(cpuid),
@@ -501,6 +529,8 @@ static void armada_xp_mpic_reenable_percpu(void)
 	}
 
 	ipi_resume();
+
+	armada_370_xp_msi_reenable_percpu();
 }
 
 static int armada_xp_mpic_starting_cpu(unsigned int cpu)
@@ -77,8 +77,8 @@ static int __init aspeed_i2c_ic_of_init(struct device_node *node,
 	}
 
 	i2c_ic->parent_irq = irq_of_parse_and_map(node, 0);
-	if (i2c_ic->parent_irq < 0) {
-		ret = i2c_ic->parent_irq;
+	if (!i2c_ic->parent_irq) {
+		ret = -EINVAL;
 		goto err_iounmap;
 	}
 
@@ -157,8 +157,8 @@ static int aspeed_scu_ic_of_init_common(struct aspeed_scu_ic *scu_ic,
 	}
 
 	irq = irq_of_parse_and_map(node, 0);
-	if (irq < 0) {
-		rc = irq;
+	if (!irq) {
+		rc = -EINVAL;
 		goto err;
 	}
 
@@ -315,7 +315,7 @@ static int __init bcm6345_l1_of_init(struct device_node *dn,
 		cpumask_set_cpu(idx, &intc->cpumask);
 	}
 
-	if (!cpumask_weight(&intc->cpumask)) {
+	if (cpumask_empty(&intc->cpumask)) {
 		ret = -ENODEV;
 		goto out_free;
 	}
@@ -136,11 +136,11 @@ static inline bool handle_irq_perbit(struct pt_regs *regs, u32 hwirq,
 				     u32 irq_base)
 {
 	if (hwirq == 0)
-		return 0;
+		return false;
 
 	generic_handle_domain_irq(root_domain, irq_base + __fls(hwirq));
 
-	return 1;
+	return true;
 }
 
 /* gx6605s 64 irqs interrupt controller */
@@ -1624,7 +1624,7 @@ static int its_select_cpu(struct irq_data *d,
 
 		cpu = cpumask_pick_least_loaded(d, tmpmask);
 	} else {
-		cpumask_and(tmpmask, irq_data_get_affinity_mask(d), cpu_online_mask);
+		cpumask_copy(tmpmask, aff_mask);
 
 		/* If we cannot cross sockets, limit the search to that node */
 		if ((its_dev->its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_23144) &&
@@ -352,28 +352,27 @@ static int gic_peek_irq(struct irq_data *d, u32 offset)
 
 static void gic_poke_irq(struct irq_data *d, u32 offset)
 {
-	void (*rwp_wait)(void);
 	void __iomem *base;
 	u32 index, mask;
 
 	offset = convert_offset_index(d, offset, &index);
 	mask = 1 << (index % 32);
 
-	if (gic_irq_in_rdist(d)) {
+	if (gic_irq_in_rdist(d))
 		base = gic_data_rdist_sgi_base();
-		rwp_wait = gic_redist_wait_for_rwp;
-	} else {
+	else
 		base = gic_data.dist_base;
-		rwp_wait = gic_dist_wait_for_rwp;
-	}
 
 	writel_relaxed(mask, base + offset + (index / 32) * 4);
-	rwp_wait();
 }
 
 static void gic_mask_irq(struct irq_data *d)
 {
 	gic_poke_irq(d, GICD_ICENABLER);
+	if (gic_irq_in_rdist(d))
+		gic_redist_wait_for_rwp();
+	else
+		gic_dist_wait_for_rwp();
 }
 
 static void gic_eoimode1_mask_irq(struct irq_data *d)
@@ -420,7 +419,11 @@ static int gic_irq_set_irqchip_state(struct irq_data *d,
 		break;
 
 	case IRQCHIP_STATE_MASKED:
-		reg = val ? GICD_ICENABLER : GICD_ISENABLER;
+		if (val) {
+			gic_mask_irq(d);
+			return 0;
+		}
+		reg = GICD_ISENABLER;
 		break;
 
 	default:
@@ -556,7 +559,8 @@ static void gic_irq_nmi_teardown(struct irq_data *d)
 
 static void gic_eoi_irq(struct irq_data *d)
 {
-	gic_write_eoir(gic_irq(d));
+	write_gicreg(gic_irq(d), ICC_EOIR1_EL1);
+	isb();
 }
 
 static void gic_eoimode1_eoi_irq(struct irq_data *d)
@@ -574,7 +578,6 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
 {
 	enum gic_intid_range range;
 	unsigned int irq = gic_irq(d);
-	void (*rwp_wait)(void);
 	void __iomem *base;
 	u32 offset, index;
 	int ret;
@@ -590,17 +593,14 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
 	    type != IRQ_TYPE_LEVEL_HIGH && type != IRQ_TYPE_EDGE_RISING)
 		return -EINVAL;
 
-	if (gic_irq_in_rdist(d)) {
+	if (gic_irq_in_rdist(d))
 		base = gic_data_rdist_sgi_base();
-		rwp_wait = gic_redist_wait_for_rwp;
-	} else {
+	else
 		base = gic_data.dist_base;
-		rwp_wait = gic_dist_wait_for_rwp;
-	}
 
 	offset = convert_offset_index(d, GICD_ICFGR, &index);
 
-	ret = gic_configure_irq(index, type, base + offset, rwp_wait);
+	ret = gic_configure_irq(index, type, base + offset, NULL);
 	if (ret && (range == PPI_RANGE || range == EPPI_RANGE)) {
 		/* Misconfigured PPIs are usually not fatal */
 		pr_warn("GIC: PPI INTID%d is secure or misconfigured\n", irq);
@@ -640,40 +640,125 @@ static void gic_deactivate_unhandled(u32 irqnr)
 		if (irqnr < 8192)
 			gic_write_dir(irqnr);
 	} else {
-		gic_write_eoir(irqnr);
+		write_gicreg(irqnr, ICC_EOIR1_EL1);
+		isb();
 	}
 }
 
-static inline void gic_handle_nmi(u32 irqnr, struct pt_regs *regs)
-{
-	bool irqs_enabled = interrupts_enabled(regs);
-	int err;
-
-	if (irqs_enabled)
-		nmi_enter();
-
-	if (static_branch_likely(&supports_deactivate_key))
-		gic_write_eoir(irqnr);
-	/*
-	 * Leave the PSR.I bit set to prevent other NMIs to be
-	 * received while handling this one.
-	 * PSR.I will be restored when we ERET to the
-	 * interrupted context.
-	 */
-	err = generic_handle_domain_nmi(gic_data.domain, irqnr);
-	if (err)
-		gic_deactivate_unhandled(irqnr);
-
-	if (irqs_enabled)
-		nmi_exit();
-}
-
-static u32 do_read_iar(struct pt_regs *regs)
-{
-	u32 iar;
-
-	if (gic_supports_nmi() && unlikely(!interrupts_enabled(regs))) {
+/*
+ * Follow a read of the IAR with any HW maintenance that needs to happen prior
+ * to invoking the relevant IRQ handler. We must do two things:
+ *
+ * (1) Ensure instruction ordering between a read of IAR and subsequent
+ *     instructions in the IRQ handler using an ISB.
+ *
+ *     It is possible for the IAR to report an IRQ which was signalled *after*
+ *     the CPU took an IRQ exception as multiple interrupts can race to be
+ *     recognized by the GIC, earlier interrupts could be withdrawn, and/or
+ *     later interrupts could be prioritized by the GIC.
+ *
+ *     For devices which are tightly coupled to the CPU, such as PMUs, a
+ *     context synchronization event is necessary to ensure that system
+ *     register state is not stale, as these may have been indirectly written
+ *     *after* exception entry.
+ *
+ * (2) Deactivate the interrupt when EOI mode 1 is in use.
+ */
+static inline void gic_complete_ack(u32 irqnr)
+{
+	if (static_branch_likely(&supports_deactivate_key))
+		write_gicreg(irqnr, ICC_EOIR1_EL1);
+
+	isb();
+}
+
+static bool gic_rpr_is_nmi_prio(void)
+{
+	if (!gic_supports_nmi())
+		return false;
+
+	return unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI));
+}
+
+static bool gic_irqnr_is_special(u32 irqnr)
+{
+	return irqnr >= 1020 && irqnr <= 1023;
+}
+
+static void __gic_handle_irq(u32 irqnr, struct pt_regs *regs)
+{
+	if (gic_irqnr_is_special(irqnr))
+		return;
+
+	gic_complete_ack(irqnr);
+
+	if (generic_handle_domain_irq(gic_data.domain, irqnr)) {
+		WARN_ONCE(true, "Unexpected interrupt (irqnr %u)\n", irqnr);
+		gic_deactivate_unhandled(irqnr);
+	}
+}
+
+static void __gic_handle_nmi(u32 irqnr, struct pt_regs *regs)
+{
+	if (gic_irqnr_is_special(irqnr))
+		return;
+
+	gic_complete_ack(irqnr);
+
+	if (generic_handle_domain_nmi(gic_data.domain, irqnr)) {
+		WARN_ONCE(true, "Unexpected pseudo-NMI (irqnr %u)\n", irqnr);
+		gic_deactivate_unhandled(irqnr);
+	}
+}
+
+/*
+ * An exception has been taken from a context with IRQs enabled, and this could
+ * be an IRQ or an NMI.
+ *
+ * The entry code called us with DAIF.IF set to keep NMIs masked. We must clear
+ * DAIF.IF (and update ICC_PMR_EL1 to mask regular IRQs) prior to returning,
+ * after handling any NMI but before handling any IRQ.
+ *
+ * The entry code has performed IRQ entry, and if an NMI is detected we must
+ * perform NMI entry/exit around invoking the handler.
+ */
+static void __gic_handle_irq_from_irqson(struct pt_regs *regs)
+{
+	bool is_nmi;
+	u32 irqnr;
+
+	irqnr = gic_read_iar();
+
+	is_nmi = gic_rpr_is_nmi_prio();
+
+	if (is_nmi) {
+		nmi_enter();
+		__gic_handle_nmi(irqnr, regs);
+		nmi_exit();
+	}
+
+	if (gic_prio_masking_enabled()) {
+		gic_pmr_mask_irqs();
+		gic_arch_enable_irqs();
+	}
+
+	if (!is_nmi)
+		__gic_handle_irq(irqnr, regs);
+}
+
+/*
+ * An exception has been taken from a context with IRQs disabled, which can only
+ * be an NMI.
+ *
+ * The entry code called us with DAIF.IF set to keep NMIs masked. We must leave
+ * DAIF.IF (and ICC_PMR_EL1) unchanged.
+ *
+ * The entry code has performed NMI entry.
+ */
+static void __gic_handle_irq_from_irqsoff(struct pt_regs *regs)
+{
 	u64 pmr;
+	u32 irqnr;
 
 	/*
 	 * We were in a context with IRQs disabled. However, the
@@ -691,47 +776,18 @@ static u32 do_read_iar(struct pt_regs *regs)
 	pmr = gic_read_pmr();
 	gic_pmr_mask_irqs();
 	isb();
-
-	iar = gic_read_iar();
-
+	irqnr = gic_read_iar();
 	gic_write_pmr(pmr);
-	} else {
-		iar = gic_read_iar();
-	}
 
-	return iar;
+	__gic_handle_nmi(irqnr, regs);
 }
 
 static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
 {
-	u32 irqnr;
-
-	irqnr = do_read_iar(regs);
-
-	/* Check for special IDs first */
-	if ((irqnr >= 1020 && irqnr <= 1023))
-		return;
-
-	if (gic_supports_nmi() &&
-	    unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI))) {
-		gic_handle_nmi(irqnr, regs);
-		return;
-	}
-
-	if (gic_prio_masking_enabled()) {
-		gic_pmr_mask_irqs();
-		gic_arch_enable_irqs();
-	}
-
-	if (static_branch_likely(&supports_deactivate_key))
-		gic_write_eoir(irqnr);
+	if (unlikely(gic_supports_nmi() && !interrupts_enabled(regs)))
+		__gic_handle_irq_from_irqsoff(regs);
 	else
-		isb();
-
-	if (generic_handle_domain_irq(gic_data.domain, irqnr)) {
-		WARN_ONCE(true, "Unexpected interrupt received!\n");
-		gic_deactivate_unhandled(irqnr);
-	}
+		__gic_handle_irq_from_irqson(regs);
 }
 
 static u32 gic_get_pribits(void)
@@ -807,8 +863,8 @@ static void __init gic_dist_init(void)
 	for (i = 0; i < GIC_ESPI_NR; i += 4)
 		writel_relaxed(GICD_INT_DEF_PRI_X4, base + GICD_IPRIORITYRnE + i);
 
-	/* Now do the common stuff, and wait for the distributor to drain */
-	gic_dist_config(base, GIC_LINE_NR, gic_dist_wait_for_rwp);
+	/* Now do the common stuff */
+	gic_dist_config(base, GIC_LINE_NR, NULL);
 
 	val = GICD_CTLR_ARE_NS | GICD_CTLR_ENABLE_G1A | GICD_CTLR_ENABLE_G1;
 	if (gic_data.rdists.gicd_typer2 & GICD_TYPER2_nASSGIcap) {
@@ -816,8 +872,9 @@ static void __init gic_dist_init(void)
 		val |= GICD_CTLR_nASSGIreq;
 	}
 
-	/* Enable distributor with ARE, Group1 */
+	/* Enable distributor with ARE, Group1, and wait for it to drain */
 	writel_relaxed(val, base + GICD_CTLR);
+	gic_dist_wait_for_rwp();
 
 	/*
 	 * Set all global interrupts to the boot CPU only. ARE must be
@@ -919,6 +976,7 @@ static int __gic_update_rdist_properties(struct redist_region *region,
 					  void __iomem *ptr)
 {
 	u64 typer = gic_read_typer(ptr + GICR_TYPER);
+	u32 ctlr = readl_relaxed(ptr + GICR_CTLR);
 
 	/* Boot-time cleanup */
 	if ((typer & GICR_TYPER_VLPIS) && (typer & GICR_TYPER_RVPEID)) {
@@ -938,9 +996,18 @@ static int __gic_update_rdist_properties(struct redist_region *region,
 
 	gic_data.rdists.has_vlpis &= !!(typer & GICR_TYPER_VLPIS);
 
-	/* RVPEID implies some form of DirectLPI, no matter what the doc says... :-/ */
+	/*
+	 * TYPER.RVPEID implies some form of DirectLPI, no matter what the
+	 * doc says... :-/ And CTLR.IR implies another subset of DirectLPI
+	 * that the ITS driver can make use of for LPIs (and not VLPIs).
+	 *
+	 * These are 3 different ways to express the same thing, depending
+	 * on the revision of the architecture and its relaxations over
+	 * time. Just group them under the 'direct_lpi' banner.
+	 */
 	gic_data.rdists.has_rvpeid &= !!(typer & GICR_TYPER_RVPEID);
 	gic_data.rdists.has_direct_lpi &= (!!(typer & GICR_TYPER_DirectLPIS) |
+					   !!(ctlr & GICR_CTLR_IR) |
 					   gic_data.rdists.has_rvpeid);
 	gic_data.rdists.has_vpend_valid_dirty &= !!(typer & GICR_TYPER_DIRTY);
 
@@ -962,7 +1029,11 @@ static void gic_update_rdist_properties(void)
 	gic_iterate_rdists(__gic_update_rdist_properties);
 	if (WARN_ON(gic_data.ppi_nr == UINT_MAX))
 		gic_data.ppi_nr = 0;
-	pr_info("%d PPIs implemented\n", gic_data.ppi_nr);
+	pr_info("GICv3 features: %d PPIs%s%s\n",
+		gic_data.ppi_nr,
+		gic_data.has_rss ? ", RSS" : "",
+		gic_data.rdists.has_direct_lpi ? ", DirectLPI" : "");
+
 	if (gic_data.rdists.has_vlpis)
 		pr_info("GICv4 features: %s%s%s\n",
 			gic_data.rdists.has_direct_lpi ? "DirectLPI " : "",
@@ -1284,8 +1355,6 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
 	 */
 	if (enabled)
 		gic_unmask_irq(d);
-	else
-		gic_dist_wait_for_rwp();
 
 	irq_data_update_effective_affinity(d, cpumask_of(cpu));
 
@@ -1803,8 +1872,6 @@ static int __init gic_init_bases(void __iomem *dist_base,
 	irq_domain_update_bus_token(gic_data.domain, DOMAIN_BUS_WIRED);
 
 	gic_data.has_rss = !!(typer & GICD_TYPER_RSS);
-	pr_info("Distributor has %sRange Selector support\n",
-		gic_data.has_rss ? "" : "no ");
 
 	if (typer & GICD_TYPER_MBIS) {
 		err = mbi_init(handle, gic_data.domain);
@@ -1980,10 +2047,10 @@ static int __init gic_of_init(struct device_node *node, struct device_node *parent)
 	u32 nr_redist_regions;
 	int err, i;
 
-	dist_base = of_iomap(node, 0);
-	if (!dist_base) {
+	dist_base = of_io_request_and_map(node, 0, "GICD");
+	if (IS_ERR(dist_base)) {
 		pr_err("%pOF: unable to map gic dist registers\n", node);
-		return -ENXIO;
+		return PTR_ERR(dist_base);
 	}
 
 	err = gic_validate_dist_version(dist_base);
@@ -2007,8 +2074,8 @@ static int __init gic_of_init(struct device_node *node, struct device_node *parent)
 		int ret;
 
 		ret = of_address_to_resource(node, 1 + i, &res);
-		rdist_regs[i].redist_base = of_iomap(node, 1 + i);
-		if (ret || !rdist_regs[i].redist_base) {
+		rdist_regs[i].redist_base = of_io_request_and_map(node, 1 + i, "GICR");
+		if (ret || IS_ERR(rdist_regs[i].redist_base)) {
 			pr_err("%pOF: couldn't map region %d\n", node, i);
 			err = -ENODEV;
 			goto out_unmap_rdist;
@@ -2034,7 +2101,7 @@ static int __init gic_of_init(struct device_node *node, struct device_node *parent)
 
 out_unmap_rdist:
 	for (i = 0; i < nr_redist_regions; i++)
-		if (rdist_regs[i].redist_base)
+		if (rdist_regs[i].redist_base && !IS_ERR(rdist_regs[i].redist_base))
 			iounmap(rdist_regs[i].redist_base);
 	kfree(rdist_regs);
 out_unmap_dist:
@@ -2081,6 +2148,7 @@ gic_acpi_parse_madt_redist(union acpi_subtable_headers *header,
 		pr_err("Couldn't map GICR region @%llx\n", redist->base_address);
 		return -ENOMEM;
 	}
+	request_mem_region(redist->base_address, redist->length, "GICR");
 
 	gic_acpi_register_redist(redist->base_address, redist_base);
 	return 0;
@@ -2103,6 +2171,7 @@ gic_acpi_parse_madt_gicc(union acpi_subtable_headers *header,
 	redist_base = ioremap(gicc->gicr_base_address, size);
 	if (!redist_base)
 		return -ENOMEM;
+	request_mem_region(gicc->gicr_base_address, size, "GICR");
 
 	gic_acpi_register_redist(gicc->gicr_base_address, redist_base);
 	return 0;
@@ -2304,6 +2373,7 @@ gic_acpi_init(union acpi_subtable_headers *header, const unsigned long end)
 		pr_err("Unable to map GICD registers\n");
 		return -ENOMEM;
 	}
+	request_mem_region(dist->base_address, ACPI_GICV3_DIST_MEM_SIZE, "GICD");
 
 	err = gic_validate_dist_version(acpi_data.dist_base);
 	if (err) {
@@ -1115,7 +1115,8 @@ static int gic_irq_domain_translate(struct irq_domain *d,
 		*type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK;
 
 		/* Make it clear that broken DTs are... broken */
-		WARN_ON(*type == IRQ_TYPE_NONE);
+		WARN(*type == IRQ_TYPE_NONE,
+		     "HW irq %ld has invalid type\n", *hwirq);
 		return 0;
 	}
 
@@ -1132,7 +1133,8 @@ static int gic_irq_domain_translate(struct irq_domain *d,
 		*hwirq = fwspec->param[0];
 		*type = fwspec->param[1];
 
-		WARN_ON(*type == IRQ_TYPE_NONE);
+		WARN(*type == IRQ_TYPE_NONE,
+		     "HW irq %ld has invalid type\n", *hwirq);
 		return 0;
 	}
 
|
@@ -12,6 +12,7 @@
 #include <linux/kernel.h>
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
+#include <linux/pm_runtime.h>
 #include <linux/spinlock.h>

 #define CTRL_STRIDE_OFF(_t, _r)	(_t * 4 * _r)
@@ -70,7 +71,7 @@ static void imx_irqsteer_irq_mask(struct irq_data *d)
 	raw_spin_unlock_irqrestore(&data->lock, flags);
 }

-static struct irq_chip imx_irqsteer_irq_chip = {
+static const struct irq_chip imx_irqsteer_irq_chip = {
 	.name		= "irqsteer",
 	.irq_mask	= imx_irqsteer_irq_mask,
 	.irq_unmask	= imx_irqsteer_irq_unmask,
@@ -175,7 +176,7 @@ static int imx_irqsteer_probe(struct platform_device *pdev)
 	data->irq_count = DIV_ROUND_UP(irqs_num, 64);
 	data->reg_num = irqs_num / 32;

-	if (IS_ENABLED(CONFIG_PM_SLEEP)) {
+	if (IS_ENABLED(CONFIG_PM)) {
 		data->saved_reg = devm_kzalloc(&pdev->dev,
 					sizeof(u32) * data->reg_num,
 					GFP_KERNEL);
@@ -199,6 +200,7 @@ static int imx_irqsteer_probe(struct platform_device *pdev)
 		ret = -ENOMEM;
 		goto out;
 	}
+	irq_domain_set_pm_device(data->domain, &pdev->dev);

 	if (!data->irq_count || data->irq_count > CHAN_MAX_OUTPUT_INT) {
 		ret = -EINVAL;
@@ -219,6 +221,9 @@ static int imx_irqsteer_probe(struct platform_device *pdev)

 	platform_set_drvdata(pdev, data);

+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+
 	return 0;
 out:
 	clk_disable_unprepare(data->ipg_clk);
@@ -241,7 +246,7 @@ static int imx_irqsteer_remove(struct platform_device *pdev)
 	return 0;
 }

-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_PM
 static void imx_irqsteer_save_regs(struct irqsteer_data *data)
 {
 	int i;
@@ -288,7 +293,10 @@ static int imx_irqsteer_resume(struct device *dev)
 #endif

 static const struct dev_pm_ops imx_irqsteer_pm_ops = {
-	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx_irqsteer_suspend, imx_irqsteer_resume)
+	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+				      pm_runtime_force_resume)
+	SET_RUNTIME_PM_OPS(imx_irqsteer_suspend,
+			   imx_irqsteer_resume, NULL)
 };

 static const struct of_device_id imx_irqsteer_dt_ids[] = {
@@ -37,11 +37,26 @@ struct exiu_irq_data {
 	u32		spi_base;
 };

-static void exiu_irq_eoi(struct irq_data *d)
+static void exiu_irq_ack(struct irq_data *d)
 {
 	struct exiu_irq_data *data = irq_data_get_irq_chip_data(d);

 	writel(BIT(d->hwirq), data->base + EIREQCLR);
+}
+
+static void exiu_irq_eoi(struct irq_data *d)
+{
+	struct exiu_irq_data *data = irq_data_get_irq_chip_data(d);
+
+	/*
+	 * Level triggered interrupts are latched and must be cleared during
+	 * EOI or the interrupt will be jammed on. Of course if a level
+	 * triggered interrupt is still asserted then the write will not clear
+	 * the interrupt.
+	 */
+	if (irqd_is_level_type(d))
+		writel(BIT(d->hwirq), data->base + EIREQCLR);
+
 	irq_chip_eoi_parent(d);
 }

@@ -91,10 +106,13 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type)
 	writel_relaxed(val, data->base + EILVL);

 	val = readl_relaxed(data->base + EIEDG);
-	if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH)
+	if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH) {
 		val &= ~BIT(d->hwirq);
-	else
+		irq_set_handler_locked(d, handle_fasteoi_irq);
+	} else {
 		val |= BIT(d->hwirq);
+		irq_set_handler_locked(d, handle_fasteoi_ack_irq);
+	}
 	writel_relaxed(val, data->base + EIEDG);

 	writel_relaxed(BIT(d->hwirq), data->base + EIREQCLR);
@@ -104,6 +122,7 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type)

 static struct irq_chip exiu_irq_chip = {
 	.name		= "EXIU",
+	.irq_ack	= exiu_irq_ack,
 	.irq_eoi	= exiu_irq_eoi,
 	.irq_enable	= exiu_irq_enable,
 	.irq_mask	= exiu_irq_mask,
@@ -249,11 +249,13 @@ static int sun6i_r_intc_domain_alloc(struct irq_domain *domain,
 	for (i = 0; i < nr_irqs; ++i, ++hwirq, ++virq) {
 		if (hwirq == nmi_hwirq) {
 			irq_domain_set_hwirq_and_chip(domain, virq, hwirq,
-						      &sun6i_r_intc_nmi_chip, 0);
+						      &sun6i_r_intc_nmi_chip,
+						      NULL);
 			irq_set_handler(virq, handle_fasteoi_ack_irq);
 		} else {
 			irq_domain_set_hwirq_and_chip(domain, virq, hwirq,
-						      &sun6i_r_intc_wakeup_chip, 0);
+						      &sun6i_r_intc_wakeup_chip,
+						      NULL);
 		}
 	}

@@ -151,14 +151,25 @@ static struct irq_chip xtensa_mx_irq_chip = {
 	.irq_set_affinity = xtensa_mx_irq_set_affinity,
 };

+static void __init xtensa_mx_init_common(struct irq_domain *root_domain)
+{
+	unsigned int i;
+
+	irq_set_default_host(root_domain);
+	secondary_init_irq();
+
+	/* Initialize default IRQ routing to CPU 0 */
+	for (i = 0; i < XCHAL_NUM_EXTINTERRUPTS; ++i)
+		set_er(1, MIROUT(i));
+}
+
 int __init xtensa_mx_init_legacy(struct device_node *interrupt_parent)
 {
 	struct irq_domain *root_domain =
 		irq_domain_add_legacy(NULL, NR_IRQS - 1, 1, 0,
 				&xtensa_mx_irq_domain_ops,
 				&xtensa_mx_irq_chip);
-	irq_set_default_host(root_domain);
-	secondary_init_irq();
+	xtensa_mx_init_common(root_domain);
 	return 0;
 }

@@ -168,8 +179,7 @@ static int __init xtensa_mx_init(struct device_node *np,
 	struct irq_domain *root_domain =
 		irq_domain_add_linear(np, NR_IRQS, &xtensa_mx_irq_domain_ops,
 				&xtensa_mx_irq_chip);
-	irq_set_default_host(root_domain);
-	secondary_init_irq();
+	xtensa_mx_init_common(root_domain);
 	return 0;
 }
 IRQCHIP_DECLARE(xtensa_mx_irq_chip, "cdns,xtensa-mx", xtensa_mx_init);
@@ -387,6 +387,8 @@ static void amd_gpio_irq_enable(struct irq_data *d)
 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
 	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);

+	gpiochip_enable_irq(gc, d->hwirq);
+
 	raw_spin_lock_irqsave(&gpio_dev->lock, flags);
 	pin_reg = readl(gpio_dev->base + (d->hwirq)*4);
 	pin_reg |= BIT(INTERRUPT_ENABLE_OFF);
@@ -408,6 +410,8 @@ static void amd_gpio_irq_disable(struct irq_data *d)
 	pin_reg &= ~BIT(INTERRUPT_MASK_OFF);
 	writel(pin_reg, gpio_dev->base + (d->hwirq)*4);
 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+
+	gpiochip_disable_irq(gc, d->hwirq);
 }

 static void amd_gpio_irq_mask(struct irq_data *d)
@@ -577,7 +581,7 @@ static void amd_irq_ack(struct irq_data *d)
 	 */
 }

-static struct irq_chip amd_gpio_irqchip = {
+static const struct irq_chip amd_gpio_irqchip = {
 	.name         = "amd_gpio",
 	.irq_ack      = amd_irq_ack,
 	.irq_enable   = amd_gpio_irq_enable,
@@ -593,7 +597,8 @@ static struct irq_chip amd_gpio_irqchip = {
 	 * the wake event. Otherwise the wake event will never clear and
 	 * prevent the system from suspending.
 	 */
-	.flags        = IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND,
+	.flags        = IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND | IRQCHIP_IMMUTABLE,
+	GPIOCHIP_IRQ_RESOURCE_HELPERS,
 };

 #define PIN_IRQ_PENDING	(BIT(INTERRUPT_STS_OFF) | BIT(WAKE_STS_OFF))
@@ -1026,7 +1031,7 @@ static int amd_gpio_probe(struct platform_device *pdev)
 	amd_gpio_irq_init(gpio_dev);

 	girq = &gpio_dev->gc.irq;
-	girq->chip = &amd_gpio_irqchip;
+	gpio_irq_chip_set_chip(girq, &amd_gpio_irqchip);
 	/* This will let us handle the parent IRQ in the driver */
 	girq->parent_handler = NULL;
 	girq->num_parents = 0;
@@ -36,7 +36,6 @@ struct apple_gpio_pinctrl {

 	struct pinctrl_desc pinctrl_desc;
 	struct gpio_chip gpio_chip;
-	struct irq_chip irq_chip;
 	u8 irqgrps[];
 };

@@ -275,17 +274,21 @@ static unsigned int apple_gpio_irq_type(unsigned int type)

 static void apple_gpio_irq_mask(struct irq_data *data)
 {
-	struct apple_gpio_pinctrl *pctl = gpiochip_get_data(irq_data_get_irq_chip_data(data));
+	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+	struct apple_gpio_pinctrl *pctl = gpiochip_get_data(gc);

 	apple_gpio_set_reg(pctl, data->hwirq, REG_GPIOx_MODE,
 			   FIELD_PREP(REG_GPIOx_MODE, REG_GPIOx_IN_IRQ_OFF));
+	gpiochip_disable_irq(gc, data->hwirq);
 }

 static void apple_gpio_irq_unmask(struct irq_data *data)
 {
-	struct apple_gpio_pinctrl *pctl = gpiochip_get_data(irq_data_get_irq_chip_data(data));
+	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+	struct apple_gpio_pinctrl *pctl = gpiochip_get_data(gc);
 	unsigned int irqtype = apple_gpio_irq_type(irqd_get_trigger_type(data));

+	gpiochip_enable_irq(gc, data->hwirq);
 	apple_gpio_set_reg(pctl, data->hwirq, REG_GPIOx_MODE,
 			   FIELD_PREP(REG_GPIOx_MODE, irqtype));
 }
@@ -343,13 +346,15 @@ static void apple_gpio_irq_handler(struct irq_desc *desc)
 	chained_irq_exit(chip, desc);
 }

-static struct irq_chip apple_gpio_irqchip = {
+static const struct irq_chip apple_gpio_irqchip = {
 	.name = "Apple-GPIO",
 	.irq_startup = apple_gpio_irq_startup,
 	.irq_ack = apple_gpio_irq_ack,
 	.irq_mask = apple_gpio_irq_mask,
 	.irq_unmask = apple_gpio_irq_unmask,
 	.irq_set_type = apple_gpio_irq_set_type,
+	.flags = IRQCHIP_IMMUTABLE,
+	GPIOCHIP_IRQ_RESOURCE_HELPERS,
 };

 /* Probe & register */
@@ -360,8 +365,6 @@ static int apple_gpio_register(struct apple_gpio_pinctrl *pctl)
 	void **irq_data = NULL;
 	int ret;

-	pctl->irq_chip = apple_gpio_irqchip;
-
 	pctl->gpio_chip.label = dev_name(pctl->dev);
 	pctl->gpio_chip.request = gpiochip_generic_request;
 	pctl->gpio_chip.free = gpiochip_generic_free;
@@ -377,7 +380,7 @@ static int apple_gpio_register(struct apple_gpio_pinctrl *pctl)
 	if (girq->num_parents) {
 		int i;

-		girq->chip = &pctl->irq_chip;
+		gpio_irq_chip_set_chip(girq, &apple_gpio_irqchip);
 		girq->parent_handler = apple_gpio_irq_handler;

 		girq->parents = kmalloc_array(girq->num_parents,
@@ -42,7 +42,6 @@
  * @chip:           gpiochip handle.
  * @desc:           pin controller descriptor
  * @restart_nb:     restart notifier block.
- * @irq_chip:       irq chip information
  * @irq:            parent irq for the TLMM irq_chip.
  * @intr_target_use_scm: route irq to application cpu using scm calls
  * @lock:           Spinlock to protect register resources as well
@@ -63,7 +62,6 @@ struct msm_pinctrl {
 	struct pinctrl_desc desc;
 	struct notifier_block restart_nb;

-	struct irq_chip irq_chip;
 	int irq;

 	bool intr_target_use_scm;
@@ -868,6 +866,8 @@ static void msm_gpio_irq_enable(struct irq_data *d)
 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
 	struct msm_pinctrl *pctrl = gpiochip_get_data(gc);

+	gpiochip_enable_irq(gc, d->hwirq);
+
 	if (d->parent_data)
 		irq_chip_enable_parent(d);

@@ -885,6 +885,8 @@ static void msm_gpio_irq_disable(struct irq_data *d)

 	if (!test_bit(d->hwirq, pctrl->skip_wake_irqs))
 		msm_gpio_irq_mask(d);
+
+	gpiochip_disable_irq(gc, d->hwirq);
 }

 /**
@@ -958,6 +960,14 @@ static void msm_gpio_irq_ack(struct irq_data *d)
 	raw_spin_unlock_irqrestore(&pctrl->lock, flags);
 }

+static void msm_gpio_irq_eoi(struct irq_data *d)
+{
+	d = d->parent_data;
+
+	if (d)
+		d->chip->irq_eoi(d);
+}
+
 static bool msm_gpio_needs_dual_edge_parent_workaround(struct irq_data *d,
 						       unsigned int type)
 {
@@ -1255,6 +1265,26 @@ static bool msm_gpio_needs_valid_mask(struct msm_pinctrl *pctrl)
 	return device_property_count_u16(pctrl->dev, "gpios") > 0;
 }

+static const struct irq_chip msm_gpio_irq_chip = {
+	.name			= "msmgpio",
+	.irq_enable		= msm_gpio_irq_enable,
+	.irq_disable		= msm_gpio_irq_disable,
+	.irq_mask		= msm_gpio_irq_mask,
+	.irq_unmask		= msm_gpio_irq_unmask,
+	.irq_ack		= msm_gpio_irq_ack,
+	.irq_eoi		= msm_gpio_irq_eoi,
+	.irq_set_type		= msm_gpio_irq_set_type,
+	.irq_set_wake		= msm_gpio_irq_set_wake,
+	.irq_request_resources	= msm_gpio_irq_reqres,
+	.irq_release_resources	= msm_gpio_irq_relres,
+	.irq_set_affinity	= msm_gpio_irq_set_affinity,
+	.irq_set_vcpu_affinity	= msm_gpio_irq_set_vcpu_affinity,
+	.flags			= (IRQCHIP_MASK_ON_SUSPEND |
+				   IRQCHIP_SET_TYPE_MASKED |
+				   IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND |
+				   IRQCHIP_IMMUTABLE),
+};
+
 static int msm_gpio_init(struct msm_pinctrl *pctrl)
 {
 	struct gpio_chip *chip;
@@ -1276,22 +1306,6 @@ static int msm_gpio_init(struct msm_pinctrl *pctrl)
 	if (msm_gpio_needs_valid_mask(pctrl))
 		chip->init_valid_mask = msm_gpio_init_valid_mask;

-	pctrl->irq_chip.name = "msmgpio";
-	pctrl->irq_chip.irq_enable = msm_gpio_irq_enable;
-	pctrl->irq_chip.irq_disable = msm_gpio_irq_disable;
-	pctrl->irq_chip.irq_mask = msm_gpio_irq_mask;
-	pctrl->irq_chip.irq_unmask = msm_gpio_irq_unmask;
-	pctrl->irq_chip.irq_ack = msm_gpio_irq_ack;
-	pctrl->irq_chip.irq_set_type = msm_gpio_irq_set_type;
-	pctrl->irq_chip.irq_set_wake = msm_gpio_irq_set_wake;
-	pctrl->irq_chip.irq_request_resources = msm_gpio_irq_reqres;
-	pctrl->irq_chip.irq_release_resources = msm_gpio_irq_relres;
-	pctrl->irq_chip.irq_set_affinity = msm_gpio_irq_set_affinity;
-	pctrl->irq_chip.irq_set_vcpu_affinity = msm_gpio_irq_set_vcpu_affinity;
-	pctrl->irq_chip.flags = IRQCHIP_MASK_ON_SUSPEND |
-				IRQCHIP_SET_TYPE_MASKED |
-				IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND;
-
 	np = of_parse_phandle(pctrl->dev->of_node, "wakeup-parent", 0);
 	if (np) {
 		chip->irq.parent_domain = irq_find_matching_host(np,
@@ -1300,7 +1314,6 @@ static int msm_gpio_init(struct msm_pinctrl *pctrl)
 		if (!chip->irq.parent_domain)
 			return -EPROBE_DEFER;
 		chip->irq.child_to_parent_hwirq = msm_gpio_wakeirq;
-		pctrl->irq_chip.irq_eoi = irq_chip_eoi_parent;
 		/*
 		 * Let's skip handling the GPIOs, if the parent irqchip
 		 * is handling the direct connect IRQ of the GPIO.
@@ -1313,7 +1326,7 @@ static int msm_gpio_init(struct msm_pinctrl *pctrl)
 	}

 	girq = &chip->irq;
-	girq->chip = &pctrl->irq_chip;
+	gpio_irq_chip_set_chip(girq, &msm_gpio_irq_chip);
 	girq->parent_handler = msm_gpio_irq_handler;
 	girq->fwnode = pctrl->dev->fwnode;
 	girq->num_parents = 1;
@@ -588,6 +588,22 @@ void gpiochip_relres_irq(struct gpio_chip *gc, unsigned int offset);
 void gpiochip_disable_irq(struct gpio_chip *gc, unsigned int offset);
 void gpiochip_enable_irq(struct gpio_chip *gc, unsigned int offset);

+/* irq_data versions of the above */
+int gpiochip_irq_reqres(struct irq_data *data);
+void gpiochip_irq_relres(struct irq_data *data);
+
+/* Paste this in your irq_chip structure */
+#define GPIOCHIP_IRQ_RESOURCE_HELPERS					\
+		.irq_request_resources	= gpiochip_irq_reqres,		\
+		.irq_release_resources	= gpiochip_irq_relres
+
+static inline void gpio_irq_chip_set_chip(struct gpio_irq_chip *girq,
+					  const struct irq_chip *chip)
+{
+	/* Yes, dropping const is ugly, but it isn't like we have a choice */
+	girq->chip = (struct irq_chip *)chip;
+}
+
 /* Line status inquiry for drivers */
 bool gpiochip_line_is_open_drain(struct gpio_chip *gc, unsigned int offset);
 bool gpiochip_line_is_open_source(struct gpio_chip *gc, unsigned int offset);
@@ -569,6 +569,7 @@ struct irq_chip {
  * IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND:  Invokes __enable_irq()/__disable_irq() for wake irqs
  *                                    in the suspend path if they are in disabled state
  * IRQCHIP_AFFINITY_PRE_STARTUP:      Default affinity update before startup
+ * IRQCHIP_IMMUTABLE:                 Don't ever change anything in this chip
  */
 enum {
 	IRQCHIP_SET_TYPE_MASKED			= (1 << 0),
@@ -582,6 +583,7 @@ enum {
 	IRQCHIP_SUPPORTS_NMI			= (1 << 8),
 	IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND	= (1 << 9),
 	IRQCHIP_AFFINITY_PRE_STARTUP		= (1 << 10),
+	IRQCHIP_IMMUTABLE			= (1 << 11),
 };

 #include <linux/irqdesc.h>
@@ -127,6 +127,8 @@
 #define GICR_PIDR2			GICD_PIDR2

 #define GICR_CTLR_ENABLE_LPIS		(1UL << 0)
+#define GICR_CTLR_CES			(1UL << 1)
+#define GICR_CTLR_IR			(1UL << 2)
 #define GICR_CTLR_RWP			(1UL << 3)

 #define GICR_TYPER_CPU_NUMBER(r)	(((r) >> 8) & 0xffff)
@@ -258,7 +258,7 @@ static int __irq_build_affinity_masks(unsigned int startvec,
 	nodemask_t nodemsk = NODE_MASK_NONE;
 	struct node_vectors *node_vectors;

-	if (!cpumask_weight(cpu_mask))
+	if (cpumask_empty(cpu_mask))
 		return 0;

 	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_mask, &nodemsk);
@@ -1573,17 +1573,12 @@ static struct device *irq_get_parent_device(struct irq_data *data)
 int irq_chip_pm_get(struct irq_data *data)
 {
 	struct device *dev = irq_get_parent_device(data);
-	int retval;
+	int retval = 0;

-	if (IS_ENABLED(CONFIG_PM) && dev) {
-		retval = pm_runtime_get_sync(dev);
-		if (retval < 0) {
-			pm_runtime_put_noidle(dev);
-			return retval;
-		}
-	}
+	if (IS_ENABLED(CONFIG_PM) && dev)
+		retval = pm_runtime_resume_and_get(dev);

-	return 0;
+	return retval;
 }

 /**
@@ -58,6 +58,7 @@ static const struct irq_bit_descr irqchip_flags[] = {
 	BIT_MASK_DESCR(IRQCHIP_SUPPORTS_LEVEL_MSI),
 	BIT_MASK_DESCR(IRQCHIP_SUPPORTS_NMI),
 	BIT_MASK_DESCR(IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND),
+	BIT_MASK_DESCR(IRQCHIP_IMMUTABLE),
 };

 static void
@@ -181,7 +181,7 @@ struct irq_domain *irq_domain_create_sim(struct fwnode_handle *fwnode,
 		goto err_free_bitmap;

 	work_ctx->irq_count = num_irqs;
-	init_irq_work(&work_ctx->work, irq_sim_handle_irq);
+	work_ctx->work = IRQ_WORK_INIT_HARD(irq_sim_handle_irq);

 	return work_ctx->domain;

@@ -222,11 +222,16 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
 {
 	struct irq_desc *desc = irq_data_to_desc(data);
 	struct irq_chip *chip = irq_data_get_irq_chip(data);
+	const struct cpumask *prog_mask;
 	int ret;

+	static DEFINE_RAW_SPINLOCK(tmp_mask_lock);
+	static struct cpumask tmp_mask;
+
 	if (!chip || !chip->irq_set_affinity)
 		return -EINVAL;

+	raw_spin_lock(&tmp_mask_lock);
 	/*
 	 * If this is a managed interrupt and housekeeping is enabled on
 	 * it check whether the requested affinity mask intersects with
@@ -248,24 +253,34 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
 	 */
 	if (irqd_affinity_is_managed(data) &&
 	    housekeeping_enabled(HK_TYPE_MANAGED_IRQ)) {
-		const struct cpumask *hk_mask, *prog_mask;
-
-		static DEFINE_RAW_SPINLOCK(tmp_mask_lock);
-		static struct cpumask tmp_mask;
+		const struct cpumask *hk_mask;

 		hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);

-		raw_spin_lock(&tmp_mask_lock);
 		cpumask_and(&tmp_mask, mask, hk_mask);
 		if (!cpumask_intersects(&tmp_mask, cpu_online_mask))
 			prog_mask = mask;
 		else
 			prog_mask = &tmp_mask;
-		ret = chip->irq_set_affinity(data, prog_mask, force);
-		raw_spin_unlock(&tmp_mask_lock);
 	} else {
-		ret = chip->irq_set_affinity(data, mask, force);
+		prog_mask = mask;
 	}

+	/*
+	 * Make sure we only provide online CPUs to the irqchip,
+	 * unless we are being asked to force the affinity (in which
+	 * case we do as we are told).
+	 */
+	cpumask_and(&tmp_mask, prog_mask, cpu_online_mask);
+	if (!force && !cpumask_empty(&tmp_mask))
+		ret = chip->irq_set_affinity(data, &tmp_mask, force);
+	else if (force)
+		ret = chip->irq_set_affinity(data, mask, force);
+	else
+		ret = -EINVAL;
+
+	raw_spin_unlock(&tmp_mask_lock);
+
 	switch (ret) {
 	case IRQ_SET_MASK_OK:
 	case IRQ_SET_MASK_OK_DONE:
@@ -286,7 +286,7 @@ void irq_matrix_remove_managed(struct irq_matrix *m, const struct cpumask *msk)
 int irq_matrix_alloc_managed(struct irq_matrix *m, const struct cpumask *msk,
 			     unsigned int *mapped_cpu)
 {
-	unsigned int bit, cpu, end = m->alloc_end;
+	unsigned int bit, cpu, end;
 	struct cpumap *cm;

 	if (cpumask_empty(msk))
@@ -818,6 +818,21 @@ static int msi_init_virq(struct irq_domain *domain, int virq, unsigned int vflag
 			irqd_clr_can_reserve(irqd);
 		if (vflags & VIRQ_NOMASK_QUIRK)
 			irqd_set_msi_nomask_quirk(irqd);
+
+		/*
+		 * If the interrupt is managed but no CPU is available to
+		 * service it, shut it down until better times. Note that
+		 * we only do this on the !RESERVE path as x86 (the only
+		 * architecture using this flag) deals with this in a
+		 * different way by using a catch-all vector.
+		 */
+		if ((vflags & VIRQ_ACTIVATE) &&
+		    irqd_affinity_is_managed(irqd) &&
+		    !cpumask_intersects(irq_data_get_affinity_mask(irqd),
+					cpu_online_mask)) {
+			irqd_set_managed_shutdown(irqd);
+			return 0;
+		}
 	}

 	if (!(vflags & VIRQ_ACTIVATE))