Merge branch 'irqdomain/next' of git://git.secretlab.ca/git/linux-2.6
commit 89d4a1753b

 Documentation/IRQ-domain.txt | 117 (new file)
@@ -0,0 +1,117 @@
irq_domain interrupt number mapping library

The current design of the Linux kernel uses a single large number
space where each separate IRQ source is assigned a different number.
This is simple when there is only one interrupt controller, but in
systems with multiple interrupt controllers the kernel must ensure
that each one gets assigned non-overlapping allocations of Linux
IRQ numbers.

The irq_alloc_desc*() and irq_free_desc*() APIs provide allocation of
irq numbers, but they don't provide any support for reverse mapping of
the controller-local IRQ (hwirq) number into the Linux IRQ number
space.

The irq_domain library adds mapping between hwirq and IRQ numbers on
top of the irq_alloc_desc*() API. An irq_domain to manage mapping is
preferred over interrupt controller drivers open coding their own
reverse mapping scheme.

irq_domain also implements translation from Device Tree interrupt
specifiers to hwirq numbers, and can be easily extended to support
other IRQ topology data sources.

=== irq_domain usage ===
An interrupt controller driver creates and registers an irq_domain by
calling one of the irq_domain_add_*() functions (each mapping method
has a different allocator function, more on that later). The function
will return a pointer to the irq_domain on success. The caller must
provide the allocator function with an irq_domain_ops structure with
the .map callback populated as a minimum.
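
For illustration only (this sketch is not part of the patch, and the
foo_* names are hypothetical), a driver's minimal ops structure might
look like the following, modelled on the gic_irq_domain_map() callback
added elsewhere in this merge:

	static int foo_irq_domain_map(struct irq_domain *d, unsigned int irq,
				      irq_hw_number_t hw)
	{
		/* Bind the Linux IRQ to this driver's irq_chip and flow
		 * handler, and stash the per-controller state. */
		irq_set_chip_and_handler(irq, &foo_irq_chip, handle_level_irq);
		irq_set_chip_data(irq, d->host_data);
		return 0;
	}

	static const struct irq_domain_ops foo_irq_domain_ops = {
		.map	= foo_irq_domain_map,
		.xlate	= irq_domain_xlate_onecell,
	};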

In most cases, the irq_domain will begin empty without any mappings
between hwirq and IRQ numbers. Mappings are added to the irq_domain
by calling irq_create_mapping() which accepts the irq_domain and a
hwirq number as arguments. If a mapping for the hwirq doesn't already
exist then it will allocate a new Linux irq_desc, associate it with
the hwirq, and call the .map() callback so the driver can perform any
required hardware setup.

When an interrupt is received, the irq_find_mapping() function should
be used to find the Linux IRQ number from the hwirq number.

If the driver has the Linux IRQ number or the irq_data pointer, and
needs to know the associated hwirq number (such as in the irq_chip
callbacks) then it can be directly obtained from irq_data->hwirq.
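
A rough sketch of both directions, modelled on the ARM GIC conversion
in this merge (foo_domain and FOO_HWIRQ are hypothetical placeholders):

	/* At setup time: establish the hwirq -> Linux IRQ mapping. */
	unsigned int virq = irq_create_mapping(foo_domain, FOO_HWIRQ);

	/* In the interrupt entry path: translate the hardware number
	 * back to the Linux IRQ number and hand it to the core. */
	generic_handle_irq(irq_find_mapping(foo_domain, hwirq));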

=== Types of irq_domain mappings ===
There are several mechanisms available for reverse mapping from hwirq
to Linux irq, and each mechanism uses a different allocation function.
Which reverse map type should be used depends on the use case. Each
of the reverse map types is described below:

==== Linear ====
irq_domain_add_linear()

The linear reverse map maintains a fixed size table indexed by the
hwirq number. When a hwirq is mapped, an irq_desc is allocated for
the hwirq, and the IRQ number is stored in the table.

The Linear map is a good choice when the maximum number of hwirqs is
fixed and a relatively small number (~ < 256). The advantages of this
map are fixed time lookup for IRQ numbers, and irq_descs are only
allocated for in-use IRQs. The disadvantage is that the table must be
as large as the largest possible hwirq number.

The majority of drivers should use the linear map.
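
Continuing the hypothetical foo_* sketch, a linear domain for a
controller with 32 interrupt lines would be registered roughly like
this (np is the controller's device_node, or NULL without Device Tree):

	struct irq_domain *d;

	d = irq_domain_add_linear(np, 32, &foo_irq_domain_ops, foo_priv);
	if (!d)
		return -ENOMEM;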

==== Tree ====
irq_domain_add_tree()

The irq_domain maintains a radix tree map from hwirq numbers to Linux
IRQs. When an hwirq is mapped, an irq_desc is allocated and the
hwirq is used as the lookup key for the radix tree.

The tree map is a good choice if the hwirq number can be very large
since it doesn't need to allocate a table as large as the largest
hwirq number. The disadvantage is that hwirq to IRQ number lookup is
dependent on how many entries are in the table.

Very few drivers should need this mapping. At the moment, powerpc
iseries is the only user.
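
Because the radix tree grows on demand, the tree allocator takes no
size argument; a sketch with the same hypothetical ops:

	d = irq_domain_add_tree(np, &foo_irq_domain_ops, foo_priv);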

==== No Map ====
irq_domain_add_nomap()

The No Map mapping is to be used when the hwirq number is
programmable in the hardware. In this case it is best to program the
Linux IRQ number into the hardware itself so that no mapping is
required. Calling irq_create_direct_mapping() will allocate a Linux
IRQ number and call the .map() callback so that the driver can program
the Linux IRQ number into the hardware.

Most drivers cannot use this mapping.
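
A sketch of the nomap flow (again with the hypothetical foo_* names):

	d = irq_domain_add_nomap(np, &foo_irq_domain_ops, foo_priv);
	...
	/* .map() is called here; the driver is expected to program the
	 * returned number into the hardware so no reverse map is needed. */
	unsigned int virq = irq_create_direct_mapping(d);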

==== Legacy ====
irq_domain_add_legacy()
irq_domain_add_legacy_isa()

The Legacy mapping is a special case for drivers that already have a
range of irq_descs allocated for the hwirqs. It is used when the
driver cannot be immediately converted to use the linear mapping. For
example, many embedded system board support files use a set of #defines
for IRQ numbers that are passed to struct device registrations. In that
case the Linux IRQ numbers cannot be dynamically assigned and the legacy
mapping should be used.

The legacy map assumes a contiguous range of IRQ numbers has already
been allocated for the controller and that the IRQ number can be
calculated by adding a fixed offset to the hwirq number, and
vice versa. The disadvantage is that it requires the interrupt
controller to manage IRQ allocations and it requires an irq_desc to be
allocated for every hwirq, even if it is unused.

The legacy map should only be used if fixed IRQ mappings must be
supported. For example, ISA controllers would use the legacy map for
mapping Linux IRQs 0-15 so that existing ISA drivers get the correct IRQ
numbers.
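
The ARM GIC conversion in this merge is a concrete example of the
legacy flow: it pre-allocates a contiguous block of descriptors and
then registers a legacy domain covering them:

	irq_base = irq_alloc_descs(irq_start, 16, gic_irqs, numa_node_id());
	...
	gic->domain = irq_domain_add_legacy(node, gic_irqs, irq_base,
					    hwirq_base, &gic_irq_domain_ops, gic);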
 MAINTAINERS | 15
@@ -3318,6 +3318,12 @@ S:	Maintained
 F:	net/ieee802154/
 F:	drivers/ieee802154/
 
+IIO SUBSYSTEM AND DRIVERS
+M:	Jonathan Cameron <jic23@cam.ac.uk>
+L:	linux-iio@vger.kernel.org
+S:	Maintained
+F:	drivers/staging/iio/
+
 IKANOS/ADI EAGLE ADSL USB DRIVER
 M:	Matthieu Castet <castet.matthieu@free.fr>
 M:	Stanislaw Gruszka <stf_xl@wp.pl>
@@ -3634,6 +3640,15 @@ S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core
 F:	kernel/irq/
 
+IRQ DOMAINS (IRQ NUMBER MAPPING LIBRARY)
+M:	Benjamin Herrenschmidt <benh@kernel.crashing.org>
+M:	Grant Likely <grant.likely@secretlab.ca>
+T:	git git://git.secretlab.ca/git/linux-2.6.git irqdomain/next
+S:	Maintained
+F:	Documentation/IRQ-domain.txt
+F:	include/linux/irqdomain.h
+F:	kernel/irq/irqdomain.c
+
 ISAPNP
 M:	Jaroslav Kysela <perex@perex.cz>
 S:	Maintained

@@ -51,7 +51,6 @@ union gic_base {
 };
 
 struct gic_chip_data {
-	unsigned int irq_offset;
 	union gic_base dist_base;
 	union gic_base cpu_base;
 #ifdef CONFIG_CPU_PM
@@ -61,9 +60,7 @@ struct gic_chip_data {
 	u32 __percpu *saved_ppi_enable;
 	u32 __percpu *saved_ppi_conf;
 #endif
-#ifdef CONFIG_IRQ_DOMAIN
-	struct irq_domain domain;
-#endif
+	struct irq_domain *domain;
 	unsigned int gic_irqs;
 #ifdef CONFIG_GIC_NON_BANKED
 	void __iomem *(*get_base)(union gic_base *);
@@ -282,7 +279,7 @@ asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
 		irqnr = irqstat & ~0x1c00;
 
 		if (likely(irqnr > 15 && irqnr < 1021)) {
-			irqnr = irq_domain_to_irq(&gic->domain, irqnr);
+			irqnr = irq_find_mapping(gic->domain, irqnr);
 			handle_IRQ(irqnr, regs);
 			continue;
 		}
@@ -314,8 +311,8 @@ static void gic_handle_cascade_irq(unsigned int irq, struct irq_desc *desc)
 	if (gic_irq == 1023)
 		goto out;
 
-	cascade_irq = irq_domain_to_irq(&chip_data->domain, gic_irq);
-	if (unlikely(gic_irq < 32 || gic_irq > 1020 || cascade_irq >= NR_IRQS))
+	cascade_irq = irq_find_mapping(chip_data->domain, gic_irq);
+	if (unlikely(gic_irq < 32 || gic_irq > 1020))
 		do_bad_IRQ(cascade_irq, desc);
 	else
 		generic_handle_irq(cascade_irq);
@@ -348,10 +345,9 @@ void __init gic_cascade_irq(unsigned int gic_nr, unsigned int irq)
 
 static void __init gic_dist_init(struct gic_chip_data *gic)
 {
-	unsigned int i, irq;
+	unsigned int i;
 	u32 cpumask;
 	unsigned int gic_irqs = gic->gic_irqs;
-	struct irq_domain *domain = &gic->domain;
 	void __iomem *base = gic_data_dist_base(gic);
 	u32 cpu = cpu_logical_map(smp_processor_id());
 
@@ -386,23 +382,6 @@ static void __init gic_dist_init(struct gic_chip_data *gic)
 	for (i = 32; i < gic_irqs; i += 32)
 		writel_relaxed(0xffffffff, base + GIC_DIST_ENABLE_CLEAR + i * 4 / 32);
 
-	/*
-	 * Setup the Linux IRQ subsystem.
-	 */
-	irq_domain_for_each_irq(domain, i, irq) {
-		if (i < 32) {
-			irq_set_percpu_devid(irq);
-			irq_set_chip_and_handler(irq, &gic_chip,
-						 handle_percpu_devid_irq);
-			set_irq_flags(irq, IRQF_VALID | IRQF_NOAUTOEN);
-		} else {
-			irq_set_chip_and_handler(irq, &gic_chip,
-						 handle_fasteoi_irq);
-			set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
-		}
-		irq_set_chip_data(irq, gic);
-	}
-
 	writel_relaxed(1, base + GIC_DIST_CTRL);
 }
 
@@ -618,11 +597,27 @@ static void __init gic_pm_init(struct gic_chip_data *gic)
 }
 #endif
 
-#ifdef CONFIG_OF
-static int gic_irq_domain_dt_translate(struct irq_domain *d,
-				       struct device_node *controller,
-				       const u32 *intspec, unsigned int intsize,
-				       unsigned long *out_hwirq, unsigned int *out_type)
+static int gic_irq_domain_map(struct irq_domain *d, unsigned int irq,
+			      irq_hw_number_t hw)
+{
+	if (hw < 32) {
+		irq_set_percpu_devid(irq);
+		irq_set_chip_and_handler(irq, &gic_chip,
+					 handle_percpu_devid_irq);
+		set_irq_flags(irq, IRQF_VALID | IRQF_NOAUTOEN);
+	} else {
+		irq_set_chip_and_handler(irq, &gic_chip,
+					 handle_fasteoi_irq);
+		set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
+	}
+	irq_set_chip_data(irq, d->host_data);
+	return 0;
+}
+
+static int gic_irq_domain_xlate(struct irq_domain *d,
+				struct device_node *controller,
+				const u32 *intspec, unsigned int intsize,
+				unsigned long *out_hwirq, unsigned int *out_type)
 {
 	if (d->of_node != controller)
 		return -EINVAL;
@@ -639,26 +634,23 @@ static int gic_irq_domain_dt_translate(struct irq_domain *d,
 	*out_type = intspec[2] & IRQ_TYPE_SENSE_MASK;
 	return 0;
 }
-#endif
 
 const struct irq_domain_ops gic_irq_domain_ops = {
-#ifdef CONFIG_OF
-	.dt_translate = gic_irq_domain_dt_translate,
-#endif
+	.map = gic_irq_domain_map,
+	.xlate = gic_irq_domain_xlate,
 };
 
 void __init gic_init_bases(unsigned int gic_nr, int irq_start,
 			   void __iomem *dist_base, void __iomem *cpu_base,
-			   u32 percpu_offset)
+			   u32 percpu_offset, struct device_node *node)
 {
+	irq_hw_number_t hwirq_base;
 	struct gic_chip_data *gic;
-	struct irq_domain *domain;
-	int gic_irqs;
+	int gic_irqs, irq_base;
 
 	BUG_ON(gic_nr >= MAX_GIC_NR);
 
 	gic = &gic_data[gic_nr];
-	domain = &gic->domain;
 #ifdef CONFIG_GIC_NON_BANKED
 	if (percpu_offset) { /* Frankein-GIC without banked registers... */
 		unsigned int cpu;
@@ -694,10 +686,10 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
 	 * For primary GICs, skip over SGIs.
 	 * For secondary GICs, skip over PPIs, too.
 	 */
-	domain->hwirq_base = 32;
+	hwirq_base = 32;
 	if (gic_nr == 0) {
 		if ((irq_start & 31) > 0) {
-			domain->hwirq_base = 16;
+			hwirq_base = 16;
 			if (irq_start != -1)
 				irq_start = (irq_start & ~31) + 16;
 		}
@@ -713,17 +705,17 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
 		gic_irqs = 1020;
 	gic->gic_irqs = gic_irqs;
 
-	domain->nr_irq = gic_irqs - domain->hwirq_base;
-	domain->irq_base = irq_alloc_descs(irq_start, 16, domain->nr_irq,
-					   numa_node_id());
-	if (IS_ERR_VALUE(domain->irq_base)) {
+	gic_irqs -= hwirq_base; /* calculate # of irqs to allocate */
+	irq_base = irq_alloc_descs(irq_start, 16, gic_irqs, numa_node_id());
+	if (IS_ERR_VALUE(irq_base)) {
 		WARN(1, "Cannot allocate irq_descs @ IRQ%d, assuming pre-allocated\n",
 		     irq_start);
-		domain->irq_base = irq_start;
+		irq_base = irq_start;
 	}
-	domain->priv = gic;
-	domain->ops = &gic_irq_domain_ops;
-	irq_domain_add(domain);
+	gic->domain = irq_domain_add_legacy(node, gic_irqs, irq_base,
+					    hwirq_base, &gic_irq_domain_ops, gic);
+	if (WARN_ON(!gic->domain))
+		return;
 
 	gic_chip.flags |= gic_arch_extn.flags;
 	gic_dist_init(gic);
@@ -768,7 +760,6 @@ int __init gic_of_init(struct device_node *node, struct device_node *parent)
 	void __iomem *dist_base;
 	u32 percpu_offset;
 	int irq;
-	struct irq_domain *domain = &gic_data[gic_cnt].domain;
 
 	if (WARN_ON(!node))
 		return -ENODEV;
@@ -782,9 +773,7 @@ int __init gic_of_init(struct device_node *node, struct device_node *parent)
 	if (of_property_read_u32(node, "cpu-offset", &percpu_offset))
 		percpu_offset = 0;
 
-	domain->of_node = of_node_get(node);
-
-	gic_init_bases(gic_cnt, -1, dist_base, cpu_base, percpu_offset);
+	gic_init_bases(gic_cnt, -1, dist_base, cpu_base, percpu_offset, node);
 
 	if (parent) {
 		irq = irq_of_parse_and_map(node, 0);

@@ -56,7 +56,7 @@ struct vic_device {
 	u32 int_enable;
 	u32 soft_int;
 	u32 protect;
-	struct irq_domain domain;
+	struct irq_domain *domain;
 };
 
 /* we cannot allocate memory when VICs are initially registered */
@@ -192,14 +192,8 @@ static void __init vic_register(void __iomem *base, unsigned int irq,
 	v->resume_sources = resume_sources;
 	v->irq = irq;
 	vic_id++;
-
-	v->domain.irq_base = irq;
-	v->domain.nr_irq = 32;
-#ifdef CONFIG_OF_IRQ
-	v->domain.of_node = of_node_get(node);
-#endif /* CONFIG_OF */
-	v->domain.ops = &irq_domain_simple_ops;
-	irq_domain_add(&v->domain);
+	v->domain = irq_domain_add_legacy(node, 32, irq, 0,
+					  &irq_domain_simple_ops, v);
 }
 
 static void vic_ack_irq(struct irq_data *d)
@@ -348,7 +342,7 @@ static void __init vic_init_st(void __iomem *base, unsigned int irq_start,
 	vic_register(base, irq_start, 0, node);
 }
 
-static void __init __vic_init(void __iomem *base, unsigned int irq_start,
+void __init __vic_init(void __iomem *base, unsigned int irq_start,
 	u32 vic_sources, u32 resume_sources,
 	struct device_node *node)
 {
@@ -444,7 +438,7 @@ static int handle_one_vic(struct vic_device *vic, struct pt_regs *regs)
 	stat = readl_relaxed(vic->base + VIC_IRQ_STATUS);
 	while (stat) {
 		irq = ffs(stat) - 1;
-		handle_IRQ(irq_domain_to_irq(&vic->domain, irq), regs);
+		handle_IRQ(irq_find_mapping(vic->domain, irq), regs);
 		stat &= ~(1 << irq);
 		handled = 1;
 	}

@@ -39,7 +39,7 @@ struct device_node;
 extern struct irq_chip gic_arch_extn;
 
 void gic_init_bases(unsigned int, int, void __iomem *, void __iomem *,
-		    u32 offset);
+		    u32 offset, struct device_node *);
 int gic_of_init(struct device_node *node, struct device_node *parent);
 void gic_secondary_init(unsigned int);
 void gic_handle_irq(struct pt_regs *regs);
@@ -49,7 +49,7 @@ void gic_raise_softirq(const struct cpumask *mask, unsigned int irq);
 static inline void gic_init(unsigned int nr, int start,
 			    void __iomem *dist , void __iomem *cpu)
 {
-	gic_init_bases(nr, start, dist, cpu, 0);
+	gic_init_bases(nr, start, dist, cpu, 0, NULL);
 }
 
 #endif

@@ -47,6 +47,8 @@
 struct device_node;
 struct pt_regs;
 
+void __vic_init(void __iomem *base, unsigned int irq_start, u32 vic_sources,
+		u32 resume_sources, struct device_node *node);
 void vic_init(void __iomem *base, unsigned int irq_start, u32 vic_sources, u32 resume_sources);
 int vic_of_init(struct device_node *node, struct device_node *parent);
 void vic_handle_irq(struct pt_regs *regs);

@@ -266,6 +266,7 @@ void die(const char *str, struct pt_regs *regs, int err)
 {
 	struct thread_info *thread = current_thread_info();
 	int ret;
+	enum bug_trap_type bug_type = BUG_TRAP_TYPE_NONE;
 
 	oops_enter();
 
@@ -273,7 +274,9 @@ void die(const char *str, struct pt_regs *regs, int err)
 	console_verbose();
 	bust_spinlocks(1);
 	if (!user_mode(regs))
-		report_bug(regs->ARM_pc, regs);
+		bug_type = report_bug(regs->ARM_pc, regs);
+	if (bug_type != BUG_TRAP_TYPE_NONE)
+		str = "Oops - BUG";
 	ret = __die(str, err, thread, regs);
 
 	if (regs && kexec_should_crash(thread->task))

@@ -10,6 +10,7 @@
 #include <asm/page.h>
 
 #define PROC_INFO							\
+	. = ALIGN(4);							\
 	VMLINUX_SYMBOL(__proc_info_begin) = .;				\
 	*(.proc.info.init)						\
 	VMLINUX_SYMBOL(__proc_info_end) = .;

@@ -394,7 +394,7 @@ void __init exynos4_init_irq(void)
 	gic_bank_offset = soc_is_exynos4412() ? 0x4000 : 0x8000;
 
 	if (!of_have_populated_dt())
-		gic_init_bases(0, IRQ_PPI(0), S5P_VA_GIC_DIST, S5P_VA_GIC_CPU, gic_bank_offset);
+		gic_init_bases(0, IRQ_PPI(0), S5P_VA_GIC_DIST, S5P_VA_GIC_CPU, gic_bank_offset, NULL);
 #ifdef CONFIG_OF
 	else
 		of_irq_init(exynos4_dt_irq_match);

@@ -47,7 +47,7 @@ static const struct of_dev_auxdata imx51_auxdata_lookup[] __initconst = {
 static int __init imx51_tzic_add_irq_domain(struct device_node *np,
 				struct device_node *interrupt_parent)
 {
-	irq_domain_add_simple(np, 0);
+	irq_domain_add_legacy(np, 128, 0, 0, &irq_domain_simple_ops, NULL);
 	return 0;
 }
 
@@ -57,7 +57,7 @@ static int __init imx51_gpio_add_irq_domain(struct device_node *np,
 	static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS;
 
 	gpio_irq_base -= 32;
-	irq_domain_add_simple(np, gpio_irq_base);
+	irq_domain_add_legacy(np, 32, gpio_irq_base, 0, &irq_domain_simple_ops, NULL);
 
 	return 0;
 }

@@ -51,7 +51,7 @@ static const struct of_dev_auxdata imx53_auxdata_lookup[] __initconst = {
 static int __init imx53_tzic_add_irq_domain(struct device_node *np,
 				struct device_node *interrupt_parent)
 {
-	irq_domain_add_simple(np, 0);
+	irq_domain_add_legacy(np, 128, 0, 0, &irq_domain_simple_ops, NULL);
 	return 0;
 }
 
@@ -61,7 +61,7 @@ static int __init imx53_gpio_add_irq_domain(struct device_node *np,
 	static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS;
 
 	gpio_irq_base -= 32;
-	irq_domain_add_simple(np, gpio_irq_base);
+	irq_domain_add_legacy(np, 32, gpio_irq_base, 0, &irq_domain_simple_ops, NULL);
 
 	return 0;
 }

@@ -97,7 +97,8 @@ static int __init imx6q_gpio_add_irq_domain(struct device_node *np,
 	static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS;
 
 	gpio_irq_base -= 32;
-	irq_domain_add_simple(np, gpio_irq_base);
+	irq_domain_add_legacy(np, 32, gpio_irq_base, 0, &irq_domain_simple_ops,
+			      NULL);
 
 	return 0;
 }

@@ -80,12 +80,8 @@ static struct of_device_id msm_dt_gic_match[] __initdata = {
 
 static void __init msm8x60_dt_init(void)
 {
-	struct device_node *node;
-
-	node = of_find_matching_node_by_address(NULL, msm_dt_gic_match,
-			MSM8X60_QGIC_DIST_PHYS);
-	if (node)
-		irq_domain_add_simple(node, GIC_SPI_START);
+	irq_domain_generate_simple(msm_dt_gic_match, MSM8X60_QGIC_DIST_PHYS,
+				   GIC_SPI_START);
 
 	if (of_machine_is_compatible("qcom,msm8660-surf")) {
 		printk(KERN_INFO "Init surf UART registers\n");

@@ -814,7 +814,7 @@ static struct omap_dss_board_info sdp4430_dss_data = {
 	.default_device	= &sdp4430_lcd_device,
 };
 
-static void omap_4430sdp_display_init(void)
+static void __init omap_4430sdp_display_init(void)
 {
 	int r;
 
@@ -851,7 +851,7 @@ static struct omap_board_mux board_mux[] __initdata = {
 #define board_mux	NULL
 #endif
 
-static void omap4_sdp4430_wifi_mux_init(void)
+static void __init omap4_sdp4430_wifi_mux_init(void)
 {
 	omap_mux_init_gpio(GPIO_WIFI_IRQ, OMAP_PIN_INPUT |
 				OMAP_PIN_OFF_WAKEUPENABLE);
@@ -878,12 +878,17 @@ static struct wl12xx_platform_data omap4_sdp4430_wlan_data __initdata = {
 	.board_tcxo_clock = WL12XX_TCXOCLOCK_26,
 };
 
-static void omap4_sdp4430_wifi_init(void)
+static void __init omap4_sdp4430_wifi_init(void)
 {
+	int ret;
+
 	omap4_sdp4430_wifi_mux_init();
-	if (wl12xx_set_platform_data(&omap4_sdp4430_wlan_data))
-		pr_err("Error setting wl12xx data\n");
-	platform_device_register(&omap_vwlan_device);
+	ret = wl12xx_set_platform_data(&omap4_sdp4430_wlan_data);
+	if (ret)
+		pr_err("Error setting wl12xx data: %d\n", ret);
+	ret = platform_device_register(&omap_vwlan_device);
+	if (ret)
+		pr_err("Error registering wl12xx device: %d\n", ret);
 }
 
 static void __init omap_4430sdp_init(void)

@@ -67,7 +67,7 @@ static void __init omap_generic_init(void)
 {
 	struct device_node *node = of_find_matching_node(NULL, intc_match);
 	if (node)
-		irq_domain_add_simple(node, 0);
+		irq_domain_add_legacy(node, 32, 0, 0, &irq_domain_simple_ops, NULL);
 
 	omap_sdrc_init(NULL, NULL);
 
@@ -617,6 +617,21 @@ static struct gpio omap3_evm_ehci_gpios[] __initdata = {
 	{ OMAP3_EVM_EHCI_SELECT, GPIOF_OUT_INIT_LOW, "select EHCI port" },
 };
 
+static void __init omap3_evm_wl12xx_init(void)
+{
+#ifdef CONFIG_WL12XX_PLATFORM_DATA
+	int ret;
+
+	/* WL12xx WLAN Init */
+	ret = wl12xx_set_platform_data(&omap3evm_wlan_data);
+	if (ret)
+		pr_err("error setting wl12xx data: %d\n", ret);
+	ret = platform_device_register(&omap3evm_wlan_regulator);
+	if (ret)
+		pr_err("error registering wl12xx device: %d\n", ret);
+#endif
+}
+
 static void __init omap3_evm_init(void)
 {
 	omap3_evm_get_revision();
@@ -665,13 +680,7 @@ static void __init omap3_evm_init(void)
 	omap_ads7846_init(1, OMAP3_EVM_TS_GPIO, 310, NULL);
 	omap3evm_init_smsc911x();
 	omap3_evm_display_init();
-
-#ifdef CONFIG_WL12XX_PLATFORM_DATA
-	/* WL12xx WLAN Init */
-	if (wl12xx_set_platform_data(&omap3evm_wlan_data))
-		pr_err("error setting wl12xx data\n");
-	platform_device_register(&omap3evm_wlan_regulator);
-#endif
+	omap3_evm_wl12xx_init();
 }
 
 MACHINE_START(OMAP3EVM, "OMAP3 EVM")

@@ -488,13 +488,15 @@ void omap4_panda_display_init(void)
 static void __init omap4_panda_init(void)
 {
 	int package = OMAP_PACKAGE_CBS;
+	int ret;
 
 	if (omap_rev() == OMAP4430_REV_ES1_0)
 		package = OMAP_PACKAGE_CBL;
 	omap4_mux_init(board_mux, NULL, package);
 
-	if (wl12xx_set_platform_data(&omap_panda_wlan_data))
-		pr_err("error setting wl12xx data\n");
+	ret = wl12xx_set_platform_data(&omap_panda_wlan_data);
+	if (ret)
+		pr_err("error setting wl12xx data: %d\n", ret);
 
 	omap4_panda_i2c_init();
 	platform_add_devices(panda_devices, ARRAY_SIZE(panda_devices));

@@ -296,8 +296,10 @@ static void enable_board_wakeup_source(void)
 
 void __init zoom_peripherals_init(void)
 {
-	if (wl12xx_set_platform_data(&omap_zoom_wlan_data))
-		pr_err("error setting wl12xx data\n");
+	int ret = wl12xx_set_platform_data(&omap_zoom_wlan_data);
+
+	if (ret)
+		pr_err("error setting wl12xx data: %d\n", ret);
 
 	omap_i2c_init();
 	platform_device_register(&omap_vwlan_device);

@@ -293,8 +293,8 @@ static inline void omap_hsmmc_mux(struct omap_mmc_platform_data *mmc_controller,
 	}
 }
 
-static int __init omap_hsmmc_pdata_init(struct omap2_hsmmc_info *c,
-					struct omap_mmc_platform_data *mmc)
+static int omap_hsmmc_pdata_init(struct omap2_hsmmc_info *c,
+				 struct omap_mmc_platform_data *mmc)
 {
 	char *hc_name;
 
@@ -430,7 +430,7 @@ static int omap_hsmmc_pdata_init(struct omap2_hsmmc_info *c,
 
 #define MAX_OMAP_MMC_HWMOD_NAME_LEN		16
 
-void __init omap_init_hsmmc(struct omap2_hsmmc_info *hsmmcinfo, int ctrl_nr)
+void omap_init_hsmmc(struct omap2_hsmmc_info *hsmmcinfo, int ctrl_nr)
 {
 	struct omap_hwmod *oh;
 	struct platform_device *pdev;
@@ -487,7 +487,7 @@ done:
 	kfree(mmc_data);
 }
 
-void __init omap2_hsmmc_init(struct omap2_hsmmc_info *controllers)
+void omap2_hsmmc_init(struct omap2_hsmmc_info *controllers)
 {
 	u32 reg;
 

@@ -100,8 +100,8 @@ void omap_mux_write_array(struct omap_mux_partition *partition,
 
 static char *omap_mux_options;
 
-static int __init _omap_mux_init_gpio(struct omap_mux_partition *partition,
-				      int gpio, int val)
+static int _omap_mux_init_gpio(struct omap_mux_partition *partition,
+			       int gpio, int val)
 {
 	struct omap_mux_entry *e;
 	struct omap_mux *gpio_mux = NULL;
@@ -145,7 +145,7 @@ static int _omap_mux_init_gpio(struct omap_mux_partition *partition,
 	return 0;
 }
 
-int __init omap_mux_init_gpio(int gpio, int val)
+int omap_mux_init_gpio(int gpio, int val)
 {
 	struct omap_mux_partition *partition;
 	int ret;
@@ -159,9 +159,9 @@ int omap_mux_init_gpio(int gpio, int val)
 	return -ENODEV;
 }
 
-static int __init _omap_mux_get_by_name(struct omap_mux_partition *partition,
-					const char *muxname,
-					struct omap_mux **found_mux)
+static int _omap_mux_get_by_name(struct omap_mux_partition *partition,
+				 const char *muxname,
+				 struct omap_mux **found_mux)
 {
 	struct omap_mux *mux = NULL;
 	struct omap_mux_entry *e;
@@ -240,7 +240,7 @@ omap_mux_get_by_name(const char *muxname,
 	return -ENODEV;
 }
 
-int __init omap_mux_init_signal(const char *muxname, int val)
+int omap_mux_init_signal(const char *muxname, int val)
 {
 	struct omap_mux_partition *partition = NULL;
 	struct omap_mux *mux = NULL;
@@ -1094,8 +1094,8 @@ static void omap_mux_init_package(struct omap_mux *superset,
 		omap_mux_package_init_balls(package_balls, superset);
 }
 
-static void omap_mux_init_signals(struct omap_mux_partition *partition,
-				  struct omap_board_mux *board_mux)
+static void __init omap_mux_init_signals(struct omap_mux_partition *partition,
+					 struct omap_board_mux *board_mux)
 {
 	omap_mux_set_cmdline_signals();
 	omap_mux_write_array(partition, board_mux);
@@ -1109,8 +1109,8 @@ static void omap_mux_init_package(struct omap_mux *superset,
 {
 }
 
-static void omap_mux_init_signals(struct omap_mux_partition *partition,
-				  struct omap_board_mux *board_mux)
+static void __init omap_mux_init_signals(struct omap_mux_partition *partition,
+					 struct omap_board_mux *board_mux)
 {
 }
 

@@ -18,6 +18,7 @@
 #include <linux/linkage.h>
 #include <linux/init.h>
 
+	__CPUINIT
 /*
  * OMAP4 specific entry point for secondary CPU to jump from ROM
  * code. This routine also provides a holding flag into which

@@ -1517,8 +1517,8 @@ static int _enable(struct omap_hwmod *oh)
 	if (oh->_state != _HWMOD_STATE_INITIALIZED &&
 	    oh->_state != _HWMOD_STATE_IDLE &&
 	    oh->_state != _HWMOD_STATE_DISABLED) {
-		WARN(1, "omap_hwmod: %s: enabled state can only be entered "
-		     "from initialized, idle, or disabled state\n", oh->name);
+		WARN(1, "omap_hwmod: %s: enabled state can only be entered from initialized, idle, or disabled state\n",
+			oh->name);
 		return -EINVAL;
 	}
 
@@ -1600,8 +1600,8 @@ static int _idle(struct omap_hwmod *oh)
 	pr_debug("omap_hwmod: %s: idling\n", oh->name);
 
 	if (oh->_state != _HWMOD_STATE_ENABLED) {
-		WARN(1, "omap_hwmod: %s: idle state can only be entered from "
-		     "enabled state\n", oh->name);
+		WARN(1, "omap_hwmod: %s: idle state can only be entered from enabled state\n",
+			oh->name);
 		return -EINVAL;
 	}
 
@@ -1682,8 +1682,8 @@ static int _shutdown(struct omap_hwmod *oh)
 
 	if (oh->_state != _HWMOD_STATE_IDLE &&
 	    oh->_state != _HWMOD_STATE_ENABLED) {
-		WARN(1, "omap_hwmod: %s: disabled state can only be entered "
-		     "from idle, or enabled state\n", oh->name);
+		WARN(1, "omap_hwmod: %s: disabled state can only be entered from idle, or enabled state\n",
+			oh->name);
 		return -EINVAL;
 	}
 
@@ -2240,8 +2240,8 @@ void omap_hwmod_ocp_barrier(struct omap_hwmod *oh)
 	BUG_ON(!oh);
 
 	if (!oh->class->sysc || !oh->class->sysc->sysc_flags) {
-		WARN(1, "omap_device: %s: OCP barrier impossible due to "
-		     "device configuration\n", oh->name);
+		WARN(1, "omap_device: %s: OCP barrier impossible due to device configuration\n",
+			oh->name);
 		return;
 	}
 

@@ -19,6 +19,7 @@
 
 #include "common.h"
 #include <plat/cpu.h>
+#include <plat/irqs.h>
 #include <plat/prcm.h>
 
 #include "vp.h"

@@ -107,18 +107,18 @@ static void omap_uart_set_noidle(struct platform_device *pdev)
 	omap_hwmod_set_slave_idlemode(od->hwmods[0], HWMOD_IDLEMODE_NO);
 }
 
-static void omap_uart_set_forceidle(struct platform_device *pdev)
+static void omap_uart_set_smartidle(struct platform_device *pdev)
 {
 	struct omap_device *od = to_omap_device(pdev);
 
-	omap_hwmod_set_slave_idlemode(od->hwmods[0], HWMOD_IDLEMODE_FORCE);
+	omap_hwmod_set_slave_idlemode(od->hwmods[0], HWMOD_IDLEMODE_SMART);
 }
 
 #else
 static void omap_uart_enable_wakeup(struct platform_device *pdev, bool enable)
 {}
 static void omap_uart_set_noidle(struct platform_device *pdev) {}
-static void omap_uart_set_forceidle(struct platform_device *pdev) {}
+static void omap_uart_set_smartidle(struct platform_device *pdev) {}
 #endif /* CONFIG_PM */
 
 #ifdef CONFIG_OMAP_MUX
@@ -349,7 +349,7 @@ void __init omap_serial_init_port(struct omap_board_data *bdata,
 	omap_up.uartclk = OMAP24XX_BASE_BAUD * 16;
 	omap_up.flags = UPF_BOOT_AUTOCONF;
 	omap_up.get_context_loss_count = omap_pm_get_dev_context_loss_count;
-	omap_up.set_forceidle = omap_uart_set_forceidle;
+	omap_up.set_forceidle = omap_uart_set_smartidle;
 	omap_up.set_noidle = omap_uart_set_noidle;
 	omap_up.enable_wakeup = omap_uart_enable_wakeup;
 	omap_up.dma_rx_buf_size = info->dma_rx_buf_size;

@@ -247,7 +247,7 @@ static void __init omap4_vc_init_channel(struct voltagedomain *voltdm)
  * omap_vc_i2c_init - initialize I2C interface to PMIC
  * @voltdm:	voltage domain containing VC data
  *
- * Use PMIC supplied seetings for I2C high-speed mode and
+ * Use PMIC supplied settings for I2C high-speed mode and
  * master code (if set) and program the VC I2C configuration
  * register.
  *
@@ -265,8 +265,8 @@ static void __init omap_vc_i2c_init(struct voltagedomain *voltdm)
 
 	if (initialized) {
 		if (voltdm->pmic->i2c_high_speed != i2c_high_speed)
-			pr_warn("%s: I2C config for all channels must match.",
-				__func__);
+			pr_warn("%s: I2C config for vdd_%s does not match other channels (%u).",
+				__func__, voltdm->name, i2c_high_speed);
 		return;
 	}
 
@@ -292,9 +292,7 @@ void __init omap_vc_init_channel(struct voltagedomain *voltdm)
 	u32 val;
 
 	if (!voltdm->pmic || !voltdm->pmic->uv_to_vsel) {
-		pr_err("%s: PMIC info requried to configure vc for"
-			"vdd_%s not populated.Hence cannot initialize vc\n",
-			__func__, voltdm->name);
+		pr_err("%s: No PMIC info for vdd_%s\n", __func__, voltdm->name);
 		return;
 	}
 

@@ -41,6 +41,11 @@ void __init omap_vp_init(struct voltagedomain *voltdm)
 	u32 val, sys_clk_rate, timeout, waittime;
 	u32 vddmin, vddmax, vstepmin, vstepmax;
 
+	if (!voltdm->pmic || !voltdm->pmic->uv_to_vsel) {
+		pr_err("%s: No PMIC info for vdd_%s\n", __func__, voltdm->name);
+		return;
+	}
+
 	if (!voltdm->read || !voltdm->write) {
 		pr_err("%s: No read/write API for accessing vdd_%s regs\n",
 			__func__, voltdm->name);

@@ -68,7 +68,7 @@ void __init sirfsoc_of_irq_init(void)
 	if (!sirfsoc_intc_base)
 		panic("unable to map intc cpu registers\n");
 
-	irq_domain_add_simple(np, 0);
+	irq_domain_add_legacy(np, 32, 0, 0, &irq_domain_simple_ops, NULL);
 
 	of_node_put(np);
 

@@ -98,8 +98,11 @@ static const struct of_device_id sic_of_match[] __initconst = {
 
 void __init versatile_init_irq(void)
 {
-	vic_init(VA_VIC_BASE, IRQ_VIC_START, ~0, 0);
-	irq_domain_generate_simple(vic_of_match, VERSATILE_VIC_BASE, IRQ_VIC_START);
+	struct device_node *np;
+
+	np = of_find_matching_node_by_address(NULL, vic_of_match,
+					      VERSATILE_VIC_BASE);
+	__vic_init(VA_VIC_BASE, IRQ_VIC_START, ~0, 0, np);
 
 	writel(~0, VA_SIC_BASE + SIC_IRQ_ENABLE_CLEAR);
 

@@ -54,9 +54,15 @@ loop1:
 	and	r1, r1, #7			@ mask of the bits for current cache only
 	cmp	r1, #2				@ see what cache we have at this level
 	blt	skip				@ skip if no cache, or just i-cache
+#ifdef CONFIG_PREEMPT
+	save_and_disable_irqs	r9		@ make cssr&csidr read atomic
+#endif
 	mcr	p15, 2, r10, c0, c0, 0		@ select current cache level in cssr
 	isb					@ isb to sych the new cssr&csidr
 	mrc	p15, 1, r1, c0, c0, 0		@ read the new csidr
+#ifdef CONFIG_PREEMPT
+	restore_irqs_notrace r9
+#endif
 	and	r2, r1, #7			@ extract the length of the cache lines
 	add	r2, r2, #4			@ add 4 (line length offset)
 	ldr	r4, =0x3ff

@@ -12,6 +12,7 @@ config TMS320C6X
 	select HAVE_GENERIC_HARDIRQS
 	select HAVE_MEMBLOCK
 	select HAVE_SPARSE_IRQ
+	select IRQ_DOMAIN
 	select OF
 	select OF_EARLY_FLATTREE
 

@@ -13,6 +13,7 @@
 #ifndef _ASM_C6X_IRQ_H
 #define _ASM_C6X_IRQ_H
 
+#include <linux/irqdomain.h>
 #include <linux/threads.h>
 #include <linux/list.h>
 #include <linux/radix-tree.h>
@@ -41,253 +42,9 @@
 /* This number is used when no interrupt has been assigned */
 #define NO_IRQ		0
 
-/* This type is the placeholder for a hardware interrupt number. It has to
- * be big enough to enclose whatever representation is used by a given
- * platform.
- */
-typedef unsigned long irq_hw_number_t;
-
-/* Interrupt controller "host" data structure. This could be defined as a
- * irq domain controller. That is, it handles the mapping between hardware
- * and virtual interrupt numbers for a given interrupt domain. The host
- * structure is generally created by the PIC code for a given PIC instance
- * (though a host can cover more than one PIC if they have a flat number
- * model). It's the host callbacks that are responsible for setting the
- * irq_chip on a given irq_desc after it's been mapped.
- *
- * The host code and data structures are fairly agnostic to the fact that
- * we use an open firmware device-tree. We do have references to struct
- * device_node in two places: in irq_find_host() to find the host matching
- * a given interrupt controller node, and of course as an argument to its
- * counterpart host->ops->match() callback. However, those are treated as
- * generic pointers by the core and the fact that it's actually a device-node
- * pointer is purely a convention between callers and implementation. This
- * code could thus be used on other architectures by replacing those two
- * by some sort of arch-specific void * "token" used to identify interrupt
- * controllers.
- */
-struct irq_host;
-struct radix_tree_root;
-struct device_node;
-
-/* Functions below are provided by the host and called whenever a new mapping
- * is created or an old mapping is disposed. The host can then proceed to
- * whatever internal data structures management is required. It also needs
- * to setup the irq_desc when returning from map().
- */
-struct irq_host_ops {
-	/* Match an interrupt controller device node to a host, returns
-	 * 1 on a match
-	 */
-	int (*match)(struct irq_host *h, struct device_node *node);
-
-	/* Create or update a mapping between a virtual irq number and a hw
-	 * irq number. This is called only once for a given mapping.
-	 */
-	int (*map)(struct irq_host *h, unsigned int virq, irq_hw_number_t hw);
-
-	/* Dispose of such a mapping */
-	void (*unmap)(struct irq_host *h, unsigned int virq);
-
-	/* Translate device-tree interrupt specifier from raw format coming
-	 * from the firmware to a irq_hw_number_t (interrupt line number) and
-	 * type (sense) that can be passed to set_irq_type(). In the absence
-	 * of this callback, irq_create_of_mapping() and irq_of_parse_and_map()
-	 * will return the hw number in the first cell and IRQ_TYPE_NONE for
-	 * the type (which amount to keeping whatever default value the
-	 * interrupt controller has for that line)
-	 */
-	int (*xlate)(struct irq_host *h, struct device_node *ctrler,
-		     const u32 *intspec, unsigned int intsize,
-		     irq_hw_number_t *out_hwirq, unsigned int *out_type);
-};
-
-struct irq_host {
-	struct list_head	link;
-
-	/* type of reverse mapping technique */
-	unsigned int		revmap_type;
-#define IRQ_HOST_MAP_PRIORITY	0 /* core priority irqs, get irqs 1..15 */
-#define IRQ_HOST_MAP_NOMAP	1 /* no fast reverse mapping */
-#define IRQ_HOST_MAP_LINEAR	2 /* linear map of interrupts */
-#define IRQ_HOST_MAP_TREE	3 /* radix tree */
-	union {
-		struct {
-			unsigned int size;
-			unsigned int *revmap;
-		} linear;
-		struct radix_tree_root tree;
-	} revmap_data;
-	struct irq_host_ops	*ops;
-	void			*host_data;
-	irq_hw_number_t		inval_irq;
-
-	/* Optional device node pointer */
-	struct device_node	*of_node;
-};
-
-struct irq_data;
-extern irq_hw_number_t irqd_to_hwirq(struct irq_data *d);
-extern irq_hw_number_t virq_to_hw(unsigned int virq);
-extern bool virq_is_host(unsigned int virq, struct irq_host *host);
-
-/**
- * irq_alloc_host - Allocate a new irq_host data structure
- * @of_node: optional device-tree node of the interrupt controller
- * @revmap_type: type of reverse mapping to use
- * @revmap_arg: for IRQ_HOST_MAP_LINEAR linear only: size of the map
- * @ops: map/unmap host callbacks
- * @inval_irq: provide a hw number in that host space that is always invalid
- *
- * Allocates and initialize and irq_host structure. Note that in the case of
- * IRQ_HOST_MAP_LEGACY, the map() callback will be called before this returns
- * for all legacy interrupts except 0 (which is always the invalid irq for
- * a legacy controller). For a IRQ_HOST_MAP_LINEAR, the map is allocated by
- * this call as well. For a IRQ_HOST_MAP_TREE, the radix tree will be allocated
- * later during boot automatically (the reverse mapping will use the slow path
- * until that happens).
- */
-extern struct irq_host *irq_alloc_host(struct device_node *of_node,
-				       unsigned int revmap_type,
-				       unsigned int revmap_arg,
-				       struct irq_host_ops *ops,
-				       irq_hw_number_t inval_irq);
-
-
-/**
- * irq_find_host - Locates a host for a given device node
- * @node: device-tree node of the interrupt controller
- */
-extern struct irq_host *irq_find_host(struct device_node *node);
-
-
-/**
- * irq_set_default_host - Set a "default" host
- * @host: default host pointer
- *
- * For convenience, it's possible to set a "default" host that will be used
- * whenever NULL is passed to irq_create_mapping(). It makes life easier for
- * platforms that want to manipulate a few hard coded interrupt numbers that
- * aren't properly represented in the device-tree.
- */
-extern void irq_set_default_host(struct irq_host *host);
-
-
-/**
- * irq_set_virq_count - Set the maximum number of virt irqs
- * @count: number of linux virtual irqs, capped with NR_IRQS
- *
- * This is mainly for use by platforms like iSeries who want to program
- * the virtual irq number in the controller to avoid the reverse mapping
- */
-extern void irq_set_virq_count(unsigned int count);
-
-
-/**
- * irq_create_mapping - Map a hardware interrupt into linux virq space
- * @host: host owning this hardware interrupt or NULL for default host
- * @hwirq: hardware irq number in that host space
- *
- * Only one mapping per hardware interrupt is permitted. Returns a linux
- * virq number.
- * If the sense/trigger is to be specified, set_irq_type() should be called
- * on the number returned from that call.
- */
-extern unsigned int irq_create_mapping(struct irq_host *host,
-				       irq_hw_number_t hwirq);
-
-
-/**
- * irq_dispose_mapping - Unmap an interrupt
- * @virq: linux virq number of the interrupt to unmap
- */
-extern void irq_dispose_mapping(unsigned int virq);
-
-/**
- * irq_find_mapping - Find a linux virq from an hw irq number.
- * @host: host owning this hardware interrupt
- * @hwirq: hardware irq number in that host space
- *
- * This is a slow path, for use by generic code. It's expected that an
- * irq controller implementation directly calls the appropriate low level
- * mapping function.
- */
-extern unsigned int irq_find_mapping(struct irq_host *host,
-				     irq_hw_number_t hwirq);
-
-/**
- * irq_create_direct_mapping - Allocate a virq for direct mapping
- * @host: host to allocate the virq for or NULL for default host
- *
- * This routine is used for irq controllers which can choose the hardware
- * interrupt numbers they generate. In such a case it's simplest to use
- * the linux virq as the hardware interrupt number.
- */
-extern unsigned int irq_create_direct_mapping(struct irq_host *host);
-
-/**
- * irq_radix_revmap_insert - Insert a hw irq to linux virq number mapping.
- * @host: host owning this hardware interrupt
- * @virq: linux irq number
- * @hwirq: hardware irq number in that host space
- *
- * This is for use by irq controllers that use a radix tree reverse
- * mapping for fast lookup.
- */
-extern void irq_radix_revmap_insert(struct irq_host *host, unsigned int virq,
-				    irq_hw_number_t hwirq);
-
-/**
- * irq_radix_revmap_lookup - Find a linux virq from a hw irq number.
- * @host: host owning this hardware interrupt
- * @hwirq: hardware irq number in that host space
- *
- * This is a fast path, for use by irq controller code that uses radix tree
- * revmaps
- */
-extern unsigned int irq_radix_revmap_lookup(struct irq_host *host,
-					    irq_hw_number_t hwirq);
-
-/**
- * irq_linear_revmap - Find a linux virq from a hw irq number.
- * @host: host owning this hardware interrupt
- * @hwirq: hardware irq number in that host space
- *
- * This is a fast path, for use by irq controller code that uses linear
- * revmaps. It does fallback to the slow path if the revmap doesn't exist
- * yet and will create the revmap entry with appropriate locking
- */
-
-extern unsigned int irq_linear_revmap(struct irq_host *host,
-				      irq_hw_number_t hwirq);
-
-
-
-/**
- * irq_alloc_virt - Allocate virtual irq numbers
- * @host: host owning these new virtual irqs
- * @count: number of consecutive numbers to allocate
- * @hint: pass a hint number, the allocator will try to use a 1:1 mapping
- *
- * This is a low level function that is used internally by irq_create_mapping()
- * and that can be used by some irq controllers implementations for things
- * like allocating ranges of numbers for MSIs. The revmaps are left untouched.
- */
-extern unsigned int irq_alloc_virt(struct irq_host *host,
-				   unsigned int count,
-				   unsigned int hint);
-
-/**
- * irq_free_virt - Free virtual irq numbers
- * @virq: virtual irq number of the first interrupt to free
- * @count: number of interrupts to free
- *
- * This function is the opposite of irq_alloc_virt. It will not clear reverse
- * maps, this should be done previously by unmap'ing the interrupt. In fact,
- * all interrupts covered by the range being freed should have been unmapped
- * prior to calling this.
- */
-extern void irq_free_virt(unsigned int virq, unsigned int count);
-
 extern void __init init_pic_c64xplus(void);
 

@@ -73,10 +73,10 @@ asmlinkage void c6x_do_IRQ(unsigned int prio, struct pt_regs *regs)
 	set_irq_regs(old_regs);
 }
 
-static struct irq_host *core_host;
+static struct irq_domain *core_domain;
 
-static int core_host_map(struct irq_host *h, unsigned int virq,
-			 irq_hw_number_t hw)
+static int core_domain_map(struct irq_domain *h, unsigned int virq,
+			   irq_hw_number_t hw)
 {
 	if (hw < 4 || hw >= NR_PRIORITY_IRQS)
 		return -EINVAL;
@@ -86,8 +86,9 @@ static int core_domain_map(struct irq_domain *h, unsigned int virq,
 	return 0;
 }
 
-static struct irq_host_ops core_host_ops = {
-	.map = core_host_map,
+static const struct irq_domain_ops core_domain_ops = {
+	.map = core_domain_map,
+	.xlate = irq_domain_xlate_onecell,
 };
 
 void __init init_IRQ(void)
@@ -100,10 +101,11 @@ void __init init_IRQ(void)
 	np = of_find_compatible_node(NULL, NULL, "ti,c64x+core-pic");
 	if (np != NULL) {
 		/* create the core host */
-		core_host = irq_alloc_host(np, IRQ_HOST_MAP_PRIORITY, 0,
-					   &core_host_ops, 0);
-		if (core_host)
-			irq_set_default_host(core_host);
+		core_domain = irq_domain_add_legacy(np, NR_PRIORITY_IRQS,
+						    0, 0, &core_domain_ops,
+						    NULL);
+		if (core_domain)
+			irq_set_default_host(core_domain);
 		of_node_put(np);
 	}
 
@ -128,601 +130,15 @@ int arch_show_interrupts(struct seq_file *p, int prec)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* IRQ controller and virtual interrupts
|
||||
*/
|
||||
|
||||
/* The main irq map itself is an array of NR_IRQ entries containing the
|
||||
* associate host and irq number. An entry with a host of NULL is free.
|
||||
* An entry can be allocated if it's free, the allocator always then sets
|
||||
* hwirq first to the host's invalid irq number and then fills ops.
|
||||
*/
|
||||
struct irq_map_entry {
|
||||
irq_hw_number_t hwirq;
|
||||
struct irq_host *host;
|
||||
};
|
||||
|
||||
static LIST_HEAD(irq_hosts);
|
||||
static DEFINE_RAW_SPINLOCK(irq_big_lock);
|
||||
static DEFINE_MUTEX(revmap_trees_mutex);
|
||||
static struct irq_map_entry irq_map[NR_IRQS];
|
||||
static unsigned int irq_virq_count = NR_IRQS;
|
||||
static struct irq_host *irq_default_host;
|
||||
|
||||
irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
|
||||
{
|
||||
return irq_map[d->irq].hwirq;
|
||||
return d->hwirq;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(irqd_to_hwirq);
|
||||
|
||||
irq_hw_number_t virq_to_hw(unsigned int virq)
|
||||
{
|
||||
return irq_map[virq].hwirq;
|
||||
struct irq_data *irq_data = irq_get_irq_data(virq);
|
||||
return WARN_ON(!irq_data) ? 0 : irq_data->hwirq;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(virq_to_hw);
|
||||
|
||||
bool virq_is_host(unsigned int virq, struct irq_host *host)
|
||||
{
|
||||
return irq_map[virq].host == host;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(virq_is_host);
|
||||
|
||||
static int default_irq_host_match(struct irq_host *h, struct device_node *np)
|
||||
{
|
||||
return h->of_node != NULL && h->of_node == np;
|
||||
}
|
||||
|
||||
struct irq_host *irq_alloc_host(struct device_node *of_node,
|
||||
unsigned int revmap_type,
|
||||
unsigned int revmap_arg,
|
||||
struct irq_host_ops *ops,
|
||||
irq_hw_number_t inval_irq)
|
||||
{
|
||||
struct irq_host *host;
|
||||
unsigned int size = sizeof(struct irq_host);
|
||||
unsigned int i;
|
||||
unsigned int *rmap;
|
||||
unsigned long flags;
|
||||
|
||||
/* Allocate structure and revmap table if using linear mapping */
|
||||
if (revmap_type == IRQ_HOST_MAP_LINEAR)
|
||||
size += revmap_arg * sizeof(unsigned int);
|
||||
host = kzalloc(size, GFP_KERNEL);
|
||||
if (host == NULL)
|
||||
return NULL;
|
||||
|
||||
/* Fill structure */
|
||||
host->revmap_type = revmap_type;
|
||||
host->inval_irq = inval_irq;
|
||||
host->ops = ops;
|
||||
host->of_node = of_node_get(of_node);
|
||||
|
||||
if (host->ops->match == NULL)
|
||||
host->ops->match = default_irq_host_match;
|
||||
|
||||
raw_spin_lock_irqsave(&irq_big_lock, flags);
|
||||
|
||||
/* Check for the priority controller. */
|
||||
if (revmap_type == IRQ_HOST_MAP_PRIORITY) {
|
||||
if (irq_map[0].host != NULL) {
|
||||
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
|
||||
of_node_put(host->of_node);
|
||||
kfree(host);
|
||||
return NULL;
|
||||
}
|
||||
irq_map[0].host = host;
|
||||
}
|
||||
|
||||
list_add(&host->link, &irq_hosts);
|
||||
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
|
||||
|
||||
/* Additional setups per revmap type */
|
||||
switch (revmap_type) {
|
||||
case IRQ_HOST_MAP_PRIORITY:
|
||||
/* 0 is always the invalid number for priority */
|
||||
host->inval_irq = 0;
|
||||
/* setup us as the host for all priority interrupts */
|
||||
for (i = 1; i < NR_PRIORITY_IRQS; i++) {
|
||||
irq_map[i].hwirq = i;
|
||||
smp_wmb();
|
||||
irq_map[i].host = host;
|
||||
smp_wmb();
|
||||
|
||||
ops->map(host, i, i);
|
||||
}
|
||||
break;
|
||||
case IRQ_HOST_MAP_LINEAR:
|
||||
rmap = (unsigned int *)(host + 1);
|
||||
for (i = 0; i < revmap_arg; i++)
|
||||
rmap[i] = NO_IRQ;
|
||||
host->revmap_data.linear.size = revmap_arg;
|
||||
smp_wmb();
|
||||
host->revmap_data.linear.revmap = rmap;
|
||||
break;
|
||||
case IRQ_HOST_MAP_TREE:
|
||||
INIT_RADIX_TREE(&host->revmap_data.tree, GFP_KERNEL);
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
pr_debug("irq: Allocated host of type %d @0x%p\n", revmap_type, host);
|
||||
|
||||
return host;
|
||||
}
|
||||
|
||||
struct irq_host *irq_find_host(struct device_node *node)
|
||||
{
|
||||
struct irq_host *h, *found = NULL;
|
||||
unsigned long flags;
|
||||
|
||||
/* We might want to match the legacy controller last since
|
||||
* it might potentially be set to match all interrupts in
|
||||
* the absence of a device node. This isn't a problem so far
|
||||
* yet though...
|
||||
*/
|
||||
raw_spin_lock_irqsave(&irq_big_lock, flags);
|
||||
list_for_each_entry(h, &irq_hosts, link)
|
||||
if (h->ops->match(h, node)) {
|
||||
found = h;
|
||||
break;
|
||||
}
|
||||
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
|
||||
return found;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(irq_find_host);
|
||||
|
||||
void irq_set_default_host(struct irq_host *host)
|
||||
{
|
||||
pr_debug("irq: Default host set to @0x%p\n", host);
|
||||
|
||||
irq_default_host = host;
|
||||
}
|
||||
|
||||
void irq_set_virq_count(unsigned int count)
|
||||
{
|
||||
pr_debug("irq: Trying to set virq count to %d\n", count);
|
||||
|
||||
BUG_ON(count < NR_PRIORITY_IRQS);
|
||||
if (count < NR_IRQS)
|
||||
irq_virq_count = count;
|
||||
}
|
||||
|
||||
static int irq_setup_virq(struct irq_host *host, unsigned int virq,
|
||||
irq_hw_number_t hwirq)
|
||||
{
|
||||
int res;
|
||||
|
||||
res = irq_alloc_desc_at(virq, 0);
|
||||
if (res != virq) {
|
||||
pr_debug("irq: -> allocating desc failed\n");
|
||||
goto error;
|
||||
}
|
||||
|
||||
/* map it */
|
||||
smp_wmb();
|
||||
irq_map[virq].hwirq = hwirq;
|
||||
smp_mb();
|
||||
|
||||
if (host->ops->map(host, virq, hwirq)) {
|
||||
pr_debug("irq: -> mapping failed, freeing\n");
|
||||
goto errdesc;
|
||||
}
|
||||
|
||||
irq_clear_status_flags(virq, IRQ_NOREQUEST);
|
||||
|
||||
return 0;
|
||||
|
||||
errdesc:
|
||||
irq_free_descs(virq, 1);
|
||||
error:
|
||||
irq_free_virt(virq, 1);
|
||||
return -1;
|
||||
}
|
||||
|
||||
unsigned int irq_create_direct_mapping(struct irq_host *host)
|
||||
{
|
||||
unsigned int virq;
|
||||
|
||||
if (host == NULL)
|
||||
host = irq_default_host;
|
||||
|
||||
BUG_ON(host == NULL);
|
||||
WARN_ON(host->revmap_type != IRQ_HOST_MAP_NOMAP);
|
||||
|
||||
virq = irq_alloc_virt(host, 1, 0);
|
||||
if (virq == NO_IRQ) {
|
||||
pr_debug("irq: create_direct virq allocation failed\n");
|
||||
return NO_IRQ;
|
||||
}
|
||||
|
||||
pr_debug("irq: create_direct obtained virq %d\n", virq);
|
||||
|
||||
if (irq_setup_virq(host, virq, virq))
|
||||
return NO_IRQ;
|
||||
|
||||
return virq;
|
||||
}
|
||||
|
||||
unsigned int irq_create_mapping(struct irq_host *host,
|
||||
irq_hw_number_t hwirq)
|
||||
{
|
||||
unsigned int virq, hint;
|
||||
|
||||
pr_debug("irq: irq_create_mapping(0x%p, 0x%lx)\n", host, hwirq);
|
||||
|
||||
/* Look for default host if nececssary */
|
||||
if (host == NULL)
|
||||
host = irq_default_host;
|
||||
if (host == NULL) {
|
||||
printk(KERN_WARNING "irq_create_mapping called for"
|
||||
" NULL host, hwirq=%lx\n", hwirq);
|
||||
WARN_ON(1);
|
||||
return NO_IRQ;
|
||||
}
|
||||
pr_debug("irq: -> using host @%p\n", host);
|
||||
|
||||
/* Check if mapping already exists */
|
||||
virq = irq_find_mapping(host, hwirq);
|
||||
if (virq != NO_IRQ) {
|
||||
pr_debug("irq: -> existing mapping on virq %d\n", virq);
|
||||
return virq;
|
||||
}
|
||||
|
||||
/* Allocate a virtual interrupt number */
|
||||
hint = hwirq % irq_virq_count;
|
||||
virq = irq_alloc_virt(host, 1, hint);
|
||||
if (virq == NO_IRQ) {
|
||||
pr_debug("irq: -> virq allocation failed\n");
|
||||
return NO_IRQ;
|
||||
}
|
||||
|
||||
if (irq_setup_virq(host, virq, hwirq))
|
||||
return NO_IRQ;
|
||||
|
||||
pr_debug("irq: irq %lu on host %s mapped to virtual irq %u\n",
|
||||
hwirq, host->of_node ? host->of_node->full_name : "null", virq);
|
||||
|
||||
return virq;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(irq_create_mapping);
|
||||
|
||||
unsigned int irq_create_of_mapping(struct device_node *controller,
				   const u32 *intspec, unsigned int intsize)
{
	struct irq_host *host;
	irq_hw_number_t hwirq;
	unsigned int type = IRQ_TYPE_NONE;
	unsigned int virq;

	if (controller == NULL)
		host = irq_default_host;
	else
		host = irq_find_host(controller);
	if (host == NULL) {
		printk(KERN_WARNING "irq: no irq host found for %s !\n",
		       controller->full_name);
		return NO_IRQ;
	}

	/* If host has no translation, then we assume interrupt line */
	if (host->ops->xlate == NULL)
		hwirq = intspec[0];
	else {
		if (host->ops->xlate(host, controller, intspec, intsize,
				     &hwirq, &type))
			return NO_IRQ;
	}

	/* Create mapping */
	virq = irq_create_mapping(host, hwirq);
	if (virq == NO_IRQ)
		return virq;

	/* Set type if specified and different than the current one */
	if (type != IRQ_TYPE_NONE &&
	    type != (irqd_get_trigger_type(irq_get_irq_data(virq))))
		irq_set_irq_type(virq, type);
	return virq;
}
EXPORT_SYMBOL_GPL(irq_create_of_mapping);

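When a host does supply ->xlate(), it decodes the raw device-tree specifier into the (hwirq, type) pair consumed above. A minimal two-cell decoder could look like this sketch (illustrative only; example_xlate is not from this commit):

	/* Sketch: cell 0 = hwirq, cell 1 = trigger type flags. */
	static int example_xlate(struct irq_host *h, struct device_node *ct,
				 const u32 *intspec, unsigned int intsize,
				 irq_hw_number_t *out_hwirq, unsigned int *out_type)
	{
		if (intsize < 2)
			return -EINVAL;
		*out_hwirq = intspec[0];
		*out_type = intspec[1] & IRQ_TYPE_SENSE_MASK;
		return 0;
	}
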
void irq_dispose_mapping(unsigned int virq)
{
	struct irq_host *host;
	irq_hw_number_t hwirq;

	if (virq == NO_IRQ)
		return;

	/* Never unmap priority interrupts */
	if (virq < NR_PRIORITY_IRQS)
		return;

	host = irq_map[virq].host;
	if (WARN_ON(host == NULL))
		return;

	irq_set_status_flags(virq, IRQ_NOREQUEST);

	/* remove chip and handler */
	irq_set_chip_and_handler(virq, NULL, NULL);

	/* Make sure it's completed */
	synchronize_irq(virq);

	/* Tell the PIC about it */
	if (host->ops->unmap)
		host->ops->unmap(host, virq);
	smp_mb();

	/* Clear reverse map */
	hwirq = irq_map[virq].hwirq;
	switch (host->revmap_type) {
	case IRQ_HOST_MAP_LINEAR:
		if (hwirq < host->revmap_data.linear.size)
			host->revmap_data.linear.revmap[hwirq] = NO_IRQ;
		break;
	case IRQ_HOST_MAP_TREE:
		mutex_lock(&revmap_trees_mutex);
		radix_tree_delete(&host->revmap_data.tree, hwirq);
		mutex_unlock(&revmap_trees_mutex);
		break;
	}

	/* Destroy map */
	smp_mb();
	irq_map[virq].hwirq = host->inval_irq;

	irq_free_descs(virq, 1);
	/* Free it */
	irq_free_virt(virq, 1);
}
EXPORT_SYMBOL_GPL(irq_dispose_mapping);

unsigned int irq_find_mapping(struct irq_host *host,
			      irq_hw_number_t hwirq)
{
	unsigned int i;
	unsigned int hint = hwirq % irq_virq_count;

	/* Look for default host if necessary */
	if (host == NULL)
		host = irq_default_host;
	if (host == NULL)
		return NO_IRQ;

	/* Slow path does a linear search of the map */
	i = hint;
	do {
		if (irq_map[i].host == host &&
		    irq_map[i].hwirq == hwirq)
			return i;
		i++;
		if (i >= irq_virq_count)
			i = 4;
	} while (i != hint);
	return NO_IRQ;
}
EXPORT_SYMBOL_GPL(irq_find_mapping);

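The slow search above is what a chained flow handler relies on at interrupt time: read the pending hwirq from the hardware, translate it, and dispatch. Sketch (the pending-register accessor is invented for illustration):

	/* Sketch: demux one hwirq in a chained handler. */
	static void example_pic_cascade(unsigned int irq, struct irq_desc *desc)
	{
		struct irq_host *host = irq_desc_get_handler_data(desc);
		irq_hw_number_t hwirq = example_read_pending();	/* hypothetical */
		unsigned int virq = irq_find_mapping(host, hwirq);

		if (virq != NO_IRQ)
			generic_handle_irq(virq);
	}
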
unsigned int irq_radix_revmap_lookup(struct irq_host *host,
				     irq_hw_number_t hwirq)
{
	struct irq_map_entry *ptr;
	unsigned int virq;

	if (WARN_ON_ONCE(host->revmap_type != IRQ_HOST_MAP_TREE))
		return irq_find_mapping(host, hwirq);

	/*
	 * The ptr returned references the static global irq_map.
	 * but freeing an irq can delete nodes along the path to
	 * do the lookup via call_rcu.
	 */
	rcu_read_lock();
	ptr = radix_tree_lookup(&host->revmap_data.tree, hwirq);
	rcu_read_unlock();

	/*
	 * If found in radix tree, then fine.
	 * Else fallback to linear lookup - this should not happen in practice
	 * as it means that we failed to insert the node in the radix tree.
	 */
	if (ptr)
		virq = ptr - irq_map;
	else
		virq = irq_find_mapping(host, hwirq);

	return virq;
}

void irq_radix_revmap_insert(struct irq_host *host, unsigned int virq,
			     irq_hw_number_t hwirq)
{
	if (WARN_ON(host->revmap_type != IRQ_HOST_MAP_TREE))
		return;

	if (virq != NO_IRQ) {
		mutex_lock(&revmap_trees_mutex);
		radix_tree_insert(&host->revmap_data.tree, hwirq,
				  &irq_map[virq]);
		mutex_unlock(&revmap_trees_mutex);
	}
}

unsigned int irq_linear_revmap(struct irq_host *host,
			       irq_hw_number_t hwirq)
{
	unsigned int *revmap;

	if (WARN_ON_ONCE(host->revmap_type != IRQ_HOST_MAP_LINEAR))
		return irq_find_mapping(host, hwirq);

	/* Check revmap bounds */
	if (unlikely(hwirq >= host->revmap_data.linear.size))
		return irq_find_mapping(host, hwirq);

	/* Check if revmap was allocated */
	revmap = host->revmap_data.linear.revmap;
	if (unlikely(revmap == NULL))
		return irq_find_mapping(host, hwirq);

	/* Fill up revmap with slow path if no mapping found */
	if (unlikely(revmap[hwirq] == NO_IRQ))
		revmap[hwirq] = irq_find_mapping(host, hwirq);

	return revmap[hwirq];
}

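Because the linear revmap is a bounded array read, it is the lookup of choice on the primary interrupt entry path; roughly like this sketch (the vector read and host pointer are assumptions, not code from this commit):

	/* Sketch: fast-path hwirq-to-virq translation at interrupt entry. */
	static void example_handle_irq(struct pt_regs *regs)
	{
		irq_hw_number_t hwirq = example_read_vector();	/* hypothetical */
		unsigned int virq = irq_linear_revmap(example_host, hwirq);

		if (virq != NO_IRQ)
			generic_handle_irq(virq);
	}
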
unsigned int irq_alloc_virt(struct irq_host *host,
			    unsigned int count,
			    unsigned int hint)
{
	unsigned long flags;
	unsigned int i, j, found = NO_IRQ;

	if (count == 0 || count > (irq_virq_count - NR_PRIORITY_IRQS))
		return NO_IRQ;

	raw_spin_lock_irqsave(&irq_big_lock, flags);

	/* Use hint for 1 interrupt if any */
	if (count == 1 && hint >= NR_PRIORITY_IRQS &&
	    hint < irq_virq_count && irq_map[hint].host == NULL) {
		found = hint;
		goto hint_found;
	}

	/* Look for count consecutive numbers in the allocatable
	 * (non-legacy) space
	 */
	for (i = NR_PRIORITY_IRQS, j = 0; i < irq_virq_count; i++) {
		if (irq_map[i].host != NULL)
			j = 0;
		else
			j++;

		if (j == count) {
			found = i - count + 1;
			break;
		}
	}
	if (found == NO_IRQ) {
		raw_spin_unlock_irqrestore(&irq_big_lock, flags);
		return NO_IRQ;
	}
hint_found:
	for (i = found; i < (found + count); i++) {
		irq_map[i].hwirq = host->inval_irq;
		smp_wmb();
		irq_map[i].host = host;
	}
	raw_spin_unlock_irqrestore(&irq_big_lock, flags);
	return found;
}

void irq_free_virt(unsigned int virq, unsigned int count)
{
	unsigned long flags;
	unsigned int i;

	WARN_ON(virq < NR_PRIORITY_IRQS);
	WARN_ON(count == 0 || (virq + count) > irq_virq_count);

	if (virq < NR_PRIORITY_IRQS) {
		if (virq + count < NR_PRIORITY_IRQS)
			return;
		count -= NR_PRIORITY_IRQS - virq;
		virq = NR_PRIORITY_IRQS;
	}

	if (count > irq_virq_count || virq > irq_virq_count - count) {
		if (virq > irq_virq_count)
			return;
		count = irq_virq_count - virq;
	}

	raw_spin_lock_irqsave(&irq_big_lock, flags);
	for (i = virq; i < (virq + count); i++) {
		struct irq_host *host;

		host = irq_map[i].host;
		irq_map[i].hwirq = host->inval_irq;
		smp_wmb();
		irq_map[i].host = NULL;
	}
	raw_spin_unlock_irqrestore(&irq_big_lock, flags);
}

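These two are mostly internal helpers of irq_create_mapping(), but they can also hand out consecutive blocks, e.g. for MSI-style ranges. A sketch under that assumption (the function and its caller are hypothetical):

	/* Sketch: reserve 4 consecutive virqs for a hypothetical MSI block. */
	static int example_alloc_msi_range(struct irq_host *host)
	{
		unsigned int base = irq_alloc_virt(host, 4, 0);

		if (base == NO_IRQ)
			return -ENOSPC;
		/* caller maps each virq; irq_free_virt(base, 4) undoes this */
		return base;
	}
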
#ifdef CONFIG_VIRQ_DEBUG
static int virq_debug_show(struct seq_file *m, void *private)
{
	unsigned long flags;
	struct irq_desc *desc;
	const char *p;
	static const char none[] = "none";
	void *data;
	int i;

	seq_printf(m, "%-5s %-7s %-15s %-18s %s\n", "virq", "hwirq",
		   "chip name", "chip data", "host name");

	for (i = 1; i < nr_irqs; i++) {
		desc = irq_to_desc(i);
		if (!desc)
			continue;

		raw_spin_lock_irqsave(&desc->lock, flags);

		if (desc->action && desc->action->handler) {
			struct irq_chip *chip;

			seq_printf(m, "%5d ", i);
			seq_printf(m, "0x%05lx ", irq_map[i].hwirq);

			chip = irq_desc_get_chip(desc);
			if (chip && chip->name)
				p = chip->name;
			else
				p = none;
			seq_printf(m, "%-15s ", p);

			data = irq_desc_get_chip_data(desc);
			seq_printf(m, "0x%16p ", data);

			if (irq_map[i].host && irq_map[i].host->of_node)
				p = irq_map[i].host->of_node->full_name;
			else
				p = none;
			seq_printf(m, "%s\n", p);
		}

		raw_spin_unlock_irqrestore(&desc->lock, flags);
	}

	return 0;
}

static int virq_debug_open(struct inode *inode, struct file *file)
{
	return single_open(file, virq_debug_show, inode->i_private);
}

static const struct file_operations virq_debug_fops = {
	.open = virq_debug_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = single_release,
};

static int __init irq_debugfs_init(void)
{
	if (debugfs_create_file("virq_mapping", S_IRUGO, powerpc_debugfs_root,
				NULL, &virq_debug_fops) == NULL)
		return -ENOMEM;

	return 0;
}
device_initcall(irq_debugfs_init);
#endif /* CONFIG_VIRQ_DEBUG */

@ -48,7 +48,7 @@ struct megamod_regs {
};

struct megamod_pic {
	struct irq_host *irqhost;
	struct irq_domain *irqhost;
	struct megamod_regs __iomem *regs;
	raw_spinlock_t lock;

@ -116,7 +116,7 @@ static void megamod_irq_cascade(unsigned int irq, struct irq_desc *desc)
	}
}

static int megamod_map(struct irq_host *h, unsigned int virq,
static int megamod_map(struct irq_domain *h, unsigned int virq,
		       irq_hw_number_t hw)
{
	struct megamod_pic *pic = h->host_data;
@ -136,21 +136,9 @@ static int megamod_map(struct irq_host *h, unsigned int virq,
	return 0;
}

static int megamod_xlate(struct irq_host *h, struct device_node *ct,
			 const u32 *intspec, unsigned int intsize,
			 irq_hw_number_t *out_hwirq, unsigned int *out_type)

{
	/* megamod intspecs must have 1 cell */
	BUG_ON(intsize != 1);
	*out_hwirq = intspec[0];
	*out_type = IRQ_TYPE_NONE;
	return 0;
}

static struct irq_host_ops megamod_host_ops = {
static const struct irq_domain_ops megamod_domain_ops = {
	.map = megamod_map,
	.xlate = megamod_xlate,
	.xlate = irq_domain_xlate_onecell,
};

static void __init set_megamod_mux(struct megamod_pic *pic, int src, int output)
@ -223,9 +211,8 @@ static struct megamod_pic * __init init_megamod_pic(struct device_node *np)
		return NULL;
	}

	pic->irqhost = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR,
				      NR_COMBINERS * 32, &megamod_host_ops,
				      IRQ_UNMAPPED);
	pic->irqhost = irq_domain_add_linear(np, NR_COMBINERS * 32,
					     &megamod_domain_ops, pic);
	if (!pic->irqhost) {
		pr_err("%s: Could not alloc host.\n", np->full_name);
		goto error_free;

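The hunk above is the canonical shape of the conversion: the hand-rolled one-cell ->xlate() goes away in favour of the generic irq_domain_xlate_onecell() helper, and the driver's private pointer is handed to the allocator instead of being patched into host_data afterwards. For a hypothetical controller the new-style registration reduces to this sketch (example_map is assumed, not defined here):

	/* Sketch: new-style linear domain registration for a 32-input PIC. */
	static const struct irq_domain_ops example_ops = {
		.map	= example_map,			/* per-hwirq chip/handler setup */
		.xlate	= irq_domain_xlate_onecell,	/* generic one-cell decode */
	};

	static void __init example_pic_init(struct device_node *np, void *priv)
	{
		struct irq_domain *d;

		d = irq_domain_add_linear(np, 32, &example_ops, priv);
		if (!d)
			pr_err("%s: could not add irq domain\n", np->full_name);
	}
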
@ -14,6 +14,7 @@ config MICROBLAZE
	select TRACING_SUPPORT
	select OF
	select OF_EARLY_FLATTREE
	select IRQ_DOMAIN
	select HAVE_GENERIC_HARDIRQS
	select GENERIC_IRQ_PROBE
	select GENERIC_IRQ_SHOW

@ -1,17 +1 @@
/*
 * Copyright (C) 2006 Atmark Techno, Inc.
 *
 * This file is subject to the terms and conditions of the GNU General Public
 * License. See the file "COPYING" in the main directory of this archive
 * for more details.
 */

#ifndef _ASM_MICROBLAZE_HARDIRQ_H
#define _ASM_MICROBLAZE_HARDIRQ_H

/* should be defined in each interrupt controller driver */
extern unsigned int get_irq(struct pt_regs *regs);

#include <asm-generic/hardirq.h>

#endif /* _ASM_MICROBLAZE_HARDIRQ_H */

@ -9,49 +9,13 @@
#ifndef _ASM_MICROBLAZE_IRQ_H
#define _ASM_MICROBLAZE_IRQ_H


/*
 * Linux IRQ# is currently offset by one to map to the hardware
 * irq number. So hardware IRQ0 maps to Linux irq 1.
 */
#define NO_IRQ_OFFSET 1
#define IRQ_OFFSET NO_IRQ_OFFSET
#define NR_IRQS (32 + IRQ_OFFSET)
#define NR_IRQS (32 + 1)
#include <asm-generic/irq.h>

/* This type is the placeholder for a hardware interrupt number. It has to
 * be big enough to enclose whatever representation is used by a given
 * platform.
 */
typedef unsigned long irq_hw_number_t;

extern unsigned int nr_irq;

struct pt_regs;
extern void do_IRQ(struct pt_regs *regs);

/** FIXME - not implement
 * irq_dispose_mapping - Unmap an interrupt
 * @virq: linux virq number of the interrupt to unmap
 */
static inline void irq_dispose_mapping(unsigned int virq)
{
	return;
}

struct irq_host;

/**
 * irq_create_mapping - Map a hardware interrupt into linux virq space
 * @host: host owning this hardware interrupt or NULL for default host
 * @hwirq: hardware irq number in that host space
 *
 * Only one mapping per hardware interrupt is permitted. Returns a linux
 * virq number.
 * If the sense/trigger is to be specified, set_irq_type() should be called
 * on the number returned from that call.
 */
extern unsigned int irq_create_mapping(struct irq_host *host,
				       irq_hw_number_t hwirq);
/* should be defined in each interrupt controller driver */
extern unsigned int get_irq(void);

#endif /* _ASM_MICROBLAZE_IRQ_H */

@ -9,6 +9,7 @@
 */

#include <linux/init.h>
#include <linux/irqdomain.h>
#include <linux/irq.h>
#include <asm/page.h>
#include <linux/io.h>
@ -25,8 +26,6 @@ static unsigned int intc_baseaddr;
#define INTC_BASE intc_baseaddr
#endif

unsigned int nr_irq;

/* No one else should require these constants, so define them locally here. */
#define ISR 0x00 /* Interrupt Status Register */
#define IPR 0x04 /* Interrupt Pending Register */
@ -84,24 +83,45 @@ static struct irq_chip intc_dev = {
	.irq_mask_ack = intc_mask_ack,
};

unsigned int get_irq(struct pt_regs *regs)
{
	int irq;
static struct irq_domain *root_domain;

	/*
	 * NOTE: This function is the one that needs to be improved in
	 * order to handle multiple interrupt controllers. It currently
	 * is hardcoded to check for interrupts only on the first INTC.
	 */
	irq = in_be32(INTC_BASE + IVR) + NO_IRQ_OFFSET;
	pr_debug("get_irq: %d\n", irq);
unsigned int get_irq(void)
{
	unsigned int hwirq, irq = -1;

	hwirq = in_be32(INTC_BASE + IVR);
	if (hwirq != -1U)
		irq = irq_find_mapping(root_domain, hwirq);

	pr_debug("get_irq: hwirq=%d, irq=%d\n", hwirq, irq);

	return irq;
}

int xintc_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hw)
{
	u32 intr_mask = (u32)d->host_data;

	if (intr_mask & (1 << hw)) {
		irq_set_chip_and_handler_name(irq, &intc_dev,
					      handle_edge_irq, "edge");
		irq_clear_status_flags(irq, IRQ_LEVEL);
	} else {
		irq_set_chip_and_handler_name(irq, &intc_dev,
					      handle_level_irq, "level");
		irq_set_status_flags(irq, IRQ_LEVEL);
	}
	return 0;
}

static const struct irq_domain_ops xintc_irq_domain_ops = {
	.xlate = irq_domain_xlate_onetwocell,
	.map = xintc_map,
};

void __init init_IRQ(void)
{
	u32 i, intr_mask;
	u32 nr_irq, intr_mask;
	struct device_node *intc = NULL;
#ifdef CONFIG_SELFMOD_INTC
	unsigned int intc_baseaddr = 0;
@ -146,16 +166,9 @@ void __init init_IRQ(void)
	/* Turn on the Master Enable. */
	out_be32(intc_baseaddr + MER, MER_HIE | MER_ME);

	for (i = IRQ_OFFSET; i < (nr_irq + IRQ_OFFSET); ++i) {
		if (intr_mask & (0x00000001 << (i - IRQ_OFFSET))) {
			irq_set_chip_and_handler_name(i, &intc_dev,
						      handle_edge_irq, "edge");
			irq_clear_status_flags(i, IRQ_LEVEL);
		} else {
			irq_set_chip_and_handler_name(i, &intc_dev,
						      handle_level_irq, "level");
			irq_set_status_flags(i, IRQ_LEVEL);
		}
		irq_get_irq_data(i)->hwirq = i - IRQ_OFFSET;
	}
	/* Yeah, okay, casting the intr_mask to a void* is butt-ugly, but I'm
	 * lazy and Michal can clean it up to something nicer when he tests
	 * and commits this patch. ~~gcl */
	root_domain = irq_domain_add_linear(intc, nr_irq, &xintc_irq_domain_ops,
					    (void *)intr_mask);
}

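The (void *)intr_mask cast simply parks a 32-bit edge/level bitmask in the domain's host_data; xintc_map() recovers it when each mapping is first created, so per-line setup happens exactly once instead of in a boot-time loop. The same pattern in isolation (sketch, hypothetical controller):

	/* Sketch: per-hwirq handler selection driven by a host_data bitmask. */
	static int example_map(struct irq_domain *d, unsigned int irq,
			       irq_hw_number_t hw)
	{
		u32 edge_mask = (u32)d->host_data;	/* stashed at creation time */

		if (edge_mask & (1 << hw))
			irq_set_handler(irq, handle_edge_irq);
		else
			irq_set_handler(irq, handle_level_irq);
		return 0;
	}
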
@ -31,14 +31,13 @@ void __irq_entry do_IRQ(struct pt_regs *regs)
	trace_hardirqs_off();

	irq_enter();
	irq = get_irq(regs);
	irq = get_irq();
next_irq:
	BUG_ON(!irq);
	/* Subtract 1 because of get_irq */
	generic_handle_irq(irq + IRQ_OFFSET - NO_IRQ_OFFSET);
	generic_handle_irq(irq);

	irq = get_irq(regs);
	if (irq) {
	irq = get_irq();
	if (irq != -1U) {
		pr_debug("next irq: %d\n", irq);
		++concurrent_irq;
		goto next_irq;
@ -48,18 +47,3 @@ next_irq:
	set_irq_regs(old_regs);
	trace_hardirqs_on();
}

/* MS: There is no any advance mapping mechanism. We are using simple 32bit
   intc without any cascades or any connection that's why mapping is 1:1 */
unsigned int irq_create_mapping(struct irq_host *host, irq_hw_number_t hwirq)
{
	return hwirq + IRQ_OFFSET;
}
EXPORT_SYMBOL_GPL(irq_create_mapping);

unsigned int irq_create_of_mapping(struct device_node *controller,
				   const u32 *intspec, unsigned int intsize)
{
	return intspec[0] + IRQ_OFFSET;
}
EXPORT_SYMBOL_GPL(irq_create_of_mapping);

@ -51,8 +51,6 @@ void __init setup_arch(char **cmdline_p)

	unflatten_device_tree();

	/* NOTE I think that this function is not necessary to call */
	/* irq_early_init(); */
	setup_cpuinfo();

	microblaze_cache_init();

@ -2327,6 +2327,7 @@ config USE_OF
	bool "Flattened Device Tree support"
	select OF
	select OF_EARLY_FLATTREE
	select IRQ_DOMAIN
	help
	  Include support for flattened device tree machine descriptions.


@ -11,15 +11,12 @@

#include <linux/linkage.h>
#include <linux/smp.h>
#include <linux/irqdomain.h>

#include <asm/mipsmtregs.h>

#include <irq.h>

static inline void irq_dispose_mapping(unsigned int virq)
{
}

#ifdef CONFIG_I8259
static inline int irq_canonicalize(int irq)
{

@ -60,20 +60,6 @@ void __init early_init_dt_setup_initrd_arch(unsigned long start,
}
#endif

/*
 * irq_create_of_mapping - Hook to resolve OF irq specifier into a Linux irq#
 *
 * Currently the mapping mechanism is trivial; simple flat hwirq numbers are
 * mapped 1:1 onto Linux irq numbers. Cascaded irq controllers are not
 * supported.
 */
unsigned int irq_create_of_mapping(struct device_node *controller,
				   const u32 *intspec, unsigned int intsize)
{
	return intspec[0];
}
EXPORT_SYMBOL_GPL(irq_create_of_mapping);

void __init early_init_devtree(void *params)
{
	/* Setup flat device-tree pointer */

@ -24,6 +24,7 @@

#include <linux/types.h>
#include <asm/irq.h>
#include <linux/irqdomain.h>
#include <linux/atomic.h>
#include <linux/of_irq.h>
#include <linux/of_fdt.h>
@ -63,15 +64,6 @@ extern const void *of_get_mac_address(struct device_node *np);
struct pci_dev;
extern int of_irq_map_pci(struct pci_dev *pdev, struct of_irq *out_irq);

/* This routine is here to provide compatibility with how powerpc
 * handles IRQ mapping for OF device nodes. We precompute and permanently
 * register them in the platform_device objects, whereas powerpc computes them
 * on request.
 */
static inline void irq_dispose_mapping(unsigned int virq)
{
}

#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_OPENRISC_PROM_H */

@ -135,6 +135,7 @@ config PPC
	select HAVE_GENERIC_HARDIRQS
	select HAVE_SPARSE_IRQ
	select IRQ_PER_CPU
	select IRQ_DOMAIN
	select GENERIC_IRQ_SHOW
	select GENERIC_IRQ_SHOW_LEVEL
	select IRQ_FORCED_THREADING

@ -25,7 +25,7 @@

struct ehv_pic {
	/* The remapper for this EHV_PIC */
	struct irq_host *irqhost;
	struct irq_domain *irqhost;

	/* The "linux" controller struct */
	struct irq_chip hc_irq;

@ -6,7 +6,7 @@

extern void i8259_init(struct device_node *node, unsigned long intack_addr);
extern unsigned int i8259_irq(void);
extern struct irq_host *i8259_get_host(void);
extern struct irq_domain *i8259_get_host(void);

#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_I8259_H */

@ -9,6 +9,7 @@
 * 2 of the License, or (at your option) any later version.
 */

#include <linux/irqdomain.h>
#include <linux/threads.h>
#include <linux/list.h>
#include <linux/radix-tree.h>
@ -35,258 +36,12 @@ extern atomic_t ppc_n_lost_interrupts;
/* Total number of virq in the platform */
#define NR_IRQS CONFIG_NR_IRQS

/* Number of irqs reserved for the legacy controller */
#define NUM_ISA_INTERRUPTS 16

/* Same thing, used by the generic IRQ code */
#define NR_IRQS_LEGACY NUM_ISA_INTERRUPTS

/* This type is the placeholder for a hardware interrupt number. It has to
 * be big enough to enclose whatever representation is used by a given
 * platform.
 */
typedef unsigned long irq_hw_number_t;

/* Interrupt controller "host" data structure. This could be defined as a
 * irq domain controller. That is, it handles the mapping between hardware
 * and virtual interrupt numbers for a given interrupt domain. The host
 * structure is generally created by the PIC code for a given PIC instance
 * (though a host can cover more than one PIC if they have a flat number
 * model). It's the host callbacks that are responsible for setting the
 * irq_chip on a given irq_desc after it's been mapped.
 *
 * The host code and data structures are fairly agnostic to the fact that
 * we use an open firmware device-tree. We do have references to struct
 * device_node in two places: in irq_find_host() to find the host matching
 * a given interrupt controller node, and of course as an argument to its
 * counterpart host->ops->match() callback. However, those are treated as
 * generic pointers by the core and the fact that it's actually a device-node
 * pointer is purely a convention between callers and implementation. This
 * code could thus be used on other architectures by replacing those two
 * by some sort of arch-specific void * "token" used to identify interrupt
 * controllers.
 */
struct irq_host;
struct radix_tree_root;

/* Functions below are provided by the host and called whenever a new mapping
 * is created or an old mapping is disposed. The host can then proceed to
 * whatever internal data structures management is required. It also needs
 * to setup the irq_desc when returning from map().
 */
struct irq_host_ops {
	/* Match an interrupt controller device node to a host, returns
	 * 1 on a match
	 */
	int (*match)(struct irq_host *h, struct device_node *node);

	/* Create or update a mapping between a virtual irq number and a hw
	 * irq number. This is called only once for a given mapping.
	 */
	int (*map)(struct irq_host *h, unsigned int virq, irq_hw_number_t hw);

	/* Dispose of such a mapping */
	void (*unmap)(struct irq_host *h, unsigned int virq);

	/* Translate device-tree interrupt specifier from raw format coming
	 * from the firmware to a irq_hw_number_t (interrupt line number) and
	 * type (sense) that can be passed to set_irq_type(). In the absence
	 * of this callback, irq_create_of_mapping() and irq_of_parse_and_map()
	 * will return the hw number in the first cell and IRQ_TYPE_NONE for
	 * the type (which amount to keeping whatever default value the
	 * interrupt controller has for that line)
	 */
	int (*xlate)(struct irq_host *h, struct device_node *ctrler,
		     const u32 *intspec, unsigned int intsize,
		     irq_hw_number_t *out_hwirq, unsigned int *out_type);
};

struct irq_host {
	struct list_head link;

	/* type of reverse mapping technique */
	unsigned int revmap_type;
#define IRQ_HOST_MAP_LEGACY 0 /* legacy 8259, gets irqs 1..15 */
#define IRQ_HOST_MAP_NOMAP 1 /* no fast reverse mapping */
#define IRQ_HOST_MAP_LINEAR 2 /* linear map of interrupts */
#define IRQ_HOST_MAP_TREE 3 /* radix tree */
	union {
		struct {
			unsigned int size;
			unsigned int *revmap;
		} linear;
		struct radix_tree_root tree;
	} revmap_data;
	struct irq_host_ops *ops;
	void *host_data;
	irq_hw_number_t inval_irq;

	/* Optional device node pointer */
	struct device_node *of_node;
};

struct irq_data;
extern irq_hw_number_t irqd_to_hwirq(struct irq_data *d);
extern irq_hw_number_t virq_to_hw(unsigned int virq);
extern bool virq_is_host(unsigned int virq, struct irq_host *host);

/**
 * irq_alloc_host - Allocate a new irq_host data structure
 * @of_node: optional device-tree node of the interrupt controller
 * @revmap_type: type of reverse mapping to use
 * @revmap_arg: for IRQ_HOST_MAP_LINEAR linear only: size of the map
 * @ops: map/unmap host callbacks
 * @inval_irq: provide a hw number in that host space that is always invalid
 *
 * Allocates and initializes an irq_host structure. Note that in the case of
 * IRQ_HOST_MAP_LEGACY, the map() callback will be called before this returns
 * for all legacy interrupts except 0 (which is always the invalid irq for
 * a legacy controller). For a IRQ_HOST_MAP_LINEAR, the map is allocated by
 * this call as well. For a IRQ_HOST_MAP_TREE, the radix tree will be allocated
 * later during boot automatically (the reverse mapping will use the slow path
 * until that happens).
 */
extern struct irq_host *irq_alloc_host(struct device_node *of_node,
				       unsigned int revmap_type,
				       unsigned int revmap_arg,
				       struct irq_host_ops *ops,
				       irq_hw_number_t inval_irq);


/**
 * irq_find_host - Locates a host for a given device node
 * @node: device-tree node of the interrupt controller
 */
extern struct irq_host *irq_find_host(struct device_node *node);


/**
 * irq_set_default_host - Set a "default" host
 * @host: default host pointer
 *
 * For convenience, it's possible to set a "default" host that will be used
 * whenever NULL is passed to irq_create_mapping(). It makes life easier for
 * platforms that want to manipulate a few hard coded interrupt numbers that
 * aren't properly represented in the device-tree.
 */
extern void irq_set_default_host(struct irq_host *host);


/**
 * irq_set_virq_count - Set the maximum number of virt irqs
 * @count: number of linux virtual irqs, capped with NR_IRQS
 *
 * This is mainly for use by platforms like iSeries who want to program
 * the virtual irq number in the controller to avoid the reverse mapping
 */
extern void irq_set_virq_count(unsigned int count);


/**
 * irq_create_mapping - Map a hardware interrupt into linux virq space
 * @host: host owning this hardware interrupt or NULL for default host
 * @hwirq: hardware irq number in that host space
 *
 * Only one mapping per hardware interrupt is permitted. Returns a linux
 * virq number.
 * If the sense/trigger is to be specified, set_irq_type() should be called
 * on the number returned from that call.
 */
extern unsigned int irq_create_mapping(struct irq_host *host,
				       irq_hw_number_t hwirq);


/**
 * irq_dispose_mapping - Unmap an interrupt
 * @virq: linux virq number of the interrupt to unmap
 */
extern void irq_dispose_mapping(unsigned int virq);

/**
 * irq_find_mapping - Find a linux virq from an hw irq number.
 * @host: host owning this hardware interrupt
 * @hwirq: hardware irq number in that host space
 *
 * This is a slow path, for use by generic code. It's expected that an
 * irq controller implementation directly calls the appropriate low level
 * mapping function.
 */
extern unsigned int irq_find_mapping(struct irq_host *host,
				     irq_hw_number_t hwirq);

/**
 * irq_create_direct_mapping - Allocate a virq for direct mapping
 * @host: host to allocate the virq for or NULL for default host
 *
 * This routine is used for irq controllers which can choose the hardware
 * interrupt numbers they generate. In such a case it's simplest to use
 * the linux virq as the hardware interrupt number.
 */
extern unsigned int irq_create_direct_mapping(struct irq_host *host);

/**
 * irq_radix_revmap_insert - Insert a hw irq to linux virq number mapping.
 * @host: host owning this hardware interrupt
 * @virq: linux irq number
 * @hwirq: hardware irq number in that host space
 *
 * This is for use by irq controllers that use a radix tree reverse
 * mapping for fast lookup.
 */
extern void irq_radix_revmap_insert(struct irq_host *host, unsigned int virq,
				    irq_hw_number_t hwirq);

/**
 * irq_radix_revmap_lookup - Find a linux virq from a hw irq number.
 * @host: host owning this hardware interrupt
 * @hwirq: hardware irq number in that host space
 *
 * This is a fast path, for use by irq controller code that uses radix tree
 * revmaps
 */
extern unsigned int irq_radix_revmap_lookup(struct irq_host *host,
					    irq_hw_number_t hwirq);

/**
 * irq_linear_revmap - Find a linux virq from a hw irq number.
 * @host: host owning this hardware interrupt
 * @hwirq: hardware irq number in that host space
 *
 * This is a fast path, for use by irq controller code that uses linear
 * revmaps. It does fallback to the slow path if the revmap doesn't exist
 * yet and will create the revmap entry with appropriate locking
 */

extern unsigned int irq_linear_revmap(struct irq_host *host,
				      irq_hw_number_t hwirq);



/**
 * irq_alloc_virt - Allocate virtual irq numbers
 * @host: host owning these new virtual irqs
 * @count: number of consecutive numbers to allocate
 * @hint: pass a hint number, the allocator will try to use a 1:1 mapping
 *
 * This is a low level function that is used internally by irq_create_mapping()
 * and that can be used by some irq controllers implementations for things
 * like allocating ranges of numbers for MSIs. The revmaps are left untouched.
 */
extern unsigned int irq_alloc_virt(struct irq_host *host,
				   unsigned int count,
				   unsigned int hint);

/**
 * irq_free_virt - Free virtual irq numbers
 * @virq: virtual irq number of the first interrupt to free
 * @count: number of interrupts to free
 *
 * This function is the opposite of irq_alloc_virt. It will not clear reverse
 * maps, this should be done previously by unmap'ing the interrupt. In fact,
 * all interrupts covered by the range being freed should have been unmapped
 * prior to calling this.
 */
extern void irq_free_virt(unsigned int virq, unsigned int count);

/**
 * irq_early_init - Init irq remapping subsystem

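Everything removed here reappears, renamed, in include/linux/irqdomain.h (irq_host becomes irq_domain, irq_host_ops becomes irq_domain_ops), so a board driver's conversion is mostly mechanical. A hedged sketch of the shape of that conversion (all example_* names are illustrative):

	/* Sketch: the old two-step powerpc idiom,
	 *	host = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR, 256, &ops, 0);
	 *	host->host_data = priv;
	 * collapses into a single generic call:
	 */
	static const struct irq_domain_ops example_ops = {
		.map	= example_map,		/* hypothetical, as before */
		.xlate	= irq_domain_xlate_onetwocell,
	};

	static struct irq_domain *example_register(struct device_node *np,
						   void *priv)
	{
		return irq_domain_add_linear(np, 256, &example_ops, priv);
	}
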
@ -255,7 +255,7 @@ struct mpic
	struct device_node *node;

	/* The remapper for this MPIC */
	struct irq_host *irqhost;
	struct irq_domain *irqhost;

	/* The "linux" controller struct */
	struct irq_chip hc_irq;

@ -86,7 +86,7 @@ struct ics {
extern unsigned int xics_default_server;
extern unsigned int xics_default_distrib_server;
extern unsigned int xics_interrupt_server_size;
extern struct irq_host *xics_host;
extern struct irq_domain *xics_host;

struct xics_cppr {
	unsigned char stack[MAX_NUM_PRIORITIES];

@ -486,409 +486,19 @@ void do_softirq(void)
	local_irq_restore(flags);
}


/*
 * IRQ controller and virtual interrupts
 */

/* The main irq map itself is an array of NR_IRQ entries containing the
 * associate host and irq number. An entry with a host of NULL is free.
 * An entry can be allocated if it's free, the allocator always then sets
 * hwirq first to the host's invalid irq number and then fills ops.
 */
struct irq_map_entry {
	irq_hw_number_t hwirq;
	struct irq_host *host;
};

static LIST_HEAD(irq_hosts);
static DEFINE_RAW_SPINLOCK(irq_big_lock);
static DEFINE_MUTEX(revmap_trees_mutex);
static struct irq_map_entry irq_map[NR_IRQS];
static unsigned int irq_virq_count = NR_IRQS;
static struct irq_host *irq_default_host;

irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
{
	return irq_map[d->irq].hwirq;
	return d->hwirq;
}
EXPORT_SYMBOL_GPL(irqd_to_hwirq);

irq_hw_number_t virq_to_hw(unsigned int virq)
{
	return irq_map[virq].hwirq;
	struct irq_data *irq_data = irq_get_irq_data(virq);
	return WARN_ON(!irq_data) ? 0 : irq_data->hwirq;
}
EXPORT_SYMBOL_GPL(virq_to_hw);

bool virq_is_host(unsigned int virq, struct irq_host *host)
{
	return irq_map[virq].host == host;
}
EXPORT_SYMBOL_GPL(virq_is_host);

static int default_irq_host_match(struct irq_host *h, struct device_node *np)
{
	return h->of_node != NULL && h->of_node == np;
}

struct irq_host *irq_alloc_host(struct device_node *of_node,
				unsigned int revmap_type,
				unsigned int revmap_arg,
				struct irq_host_ops *ops,
				irq_hw_number_t inval_irq)
{
	struct irq_host *host;
	unsigned int size = sizeof(struct irq_host);
	unsigned int i;
	unsigned int *rmap;
	unsigned long flags;

	/* Allocate structure and revmap table if using linear mapping */
	if (revmap_type == IRQ_HOST_MAP_LINEAR)
		size += revmap_arg * sizeof(unsigned int);
	host = kzalloc(size, GFP_KERNEL);
	if (host == NULL)
		return NULL;

	/* Fill structure */
	host->revmap_type = revmap_type;
	host->inval_irq = inval_irq;
	host->ops = ops;
	host->of_node = of_node_get(of_node);

	if (host->ops->match == NULL)
		host->ops->match = default_irq_host_match;

	raw_spin_lock_irqsave(&irq_big_lock, flags);

	/* If it's a legacy controller, check for duplicates and
	 * mark it as allocated (we use irq 0 host pointer for that
	 */
	if (revmap_type == IRQ_HOST_MAP_LEGACY) {
		if (irq_map[0].host != NULL) {
			raw_spin_unlock_irqrestore(&irq_big_lock, flags);
			of_node_put(host->of_node);
			kfree(host);
			return NULL;
		}
		irq_map[0].host = host;
	}

	list_add(&host->link, &irq_hosts);
	raw_spin_unlock_irqrestore(&irq_big_lock, flags);

	/* Additional setups per revmap type */
	switch(revmap_type) {
	case IRQ_HOST_MAP_LEGACY:
		/* 0 is always the invalid number for legacy */
		host->inval_irq = 0;
		/* setup us as the host for all legacy interrupts */
		for (i = 1; i < NUM_ISA_INTERRUPTS; i++) {
			irq_map[i].hwirq = i;
			smp_wmb();
			irq_map[i].host = host;
			smp_wmb();

			/* Legacy flags are left to default at this point,
			 * one can then use irq_create_mapping() to
			 * explicitly change them
			 */
			ops->map(host, i, i);

			/* Clear norequest flags */
			irq_clear_status_flags(i, IRQ_NOREQUEST);
		}
		break;
	case IRQ_HOST_MAP_LINEAR:
		rmap = (unsigned int *)(host + 1);
		for (i = 0; i < revmap_arg; i++)
			rmap[i] = NO_IRQ;
		host->revmap_data.linear.size = revmap_arg;
		smp_wmb();
		host->revmap_data.linear.revmap = rmap;
		break;
	case IRQ_HOST_MAP_TREE:
		INIT_RADIX_TREE(&host->revmap_data.tree, GFP_KERNEL);
		break;
	default:
		break;
	}

	pr_debug("irq: Allocated host of type %d @0x%p\n", revmap_type, host);

	return host;
}

struct irq_host *irq_find_host(struct device_node *node)
{
	struct irq_host *h, *found = NULL;
	unsigned long flags;

	/* We might want to match the legacy controller last since
	 * it might potentially be set to match all interrupts in
	 * the absence of a device node. This isn't a problem so far
	 * yet though...
	 */
	raw_spin_lock_irqsave(&irq_big_lock, flags);
	list_for_each_entry(h, &irq_hosts, link)
		if (h->ops->match(h, node)) {
			found = h;
			break;
		}
	raw_spin_unlock_irqrestore(&irq_big_lock, flags);
	return found;
}
EXPORT_SYMBOL_GPL(irq_find_host);

void irq_set_default_host(struct irq_host *host)
{
	pr_debug("irq: Default host set to @0x%p\n", host);

	irq_default_host = host;
}

void irq_set_virq_count(unsigned int count)
{
	pr_debug("irq: Trying to set virq count to %d\n", count);

	BUG_ON(count < NUM_ISA_INTERRUPTS);
	if (count < NR_IRQS)
		irq_virq_count = count;
}

static int irq_setup_virq(struct irq_host *host, unsigned int virq,
			  irq_hw_number_t hwirq)
{
	int res;

	res = irq_alloc_desc_at(virq, 0);
	if (res != virq) {
		pr_debug("irq: -> allocating desc failed\n");
		goto error;
	}

	/* map it */
	smp_wmb();
	irq_map[virq].hwirq = hwirq;
	smp_mb();

	if (host->ops->map(host, virq, hwirq)) {
		pr_debug("irq: -> mapping failed, freeing\n");
		goto errdesc;
	}

	irq_clear_status_flags(virq, IRQ_NOREQUEST);

	return 0;

errdesc:
	irq_free_descs(virq, 1);
error:
	irq_free_virt(virq, 1);
	return -1;
}

unsigned int irq_create_direct_mapping(struct irq_host *host)
{
	unsigned int virq;

	if (host == NULL)
		host = irq_default_host;

	BUG_ON(host == NULL);
	WARN_ON(host->revmap_type != IRQ_HOST_MAP_NOMAP);

	virq = irq_alloc_virt(host, 1, 0);
	if (virq == NO_IRQ) {
		pr_debug("irq: create_direct virq allocation failed\n");
		return NO_IRQ;
	}

	pr_debug("irq: create_direct obtained virq %d\n", virq);

	if (irq_setup_virq(host, virq, virq))
		return NO_IRQ;

	return virq;
}

unsigned int irq_create_mapping(struct irq_host *host,
				irq_hw_number_t hwirq)
{
	unsigned int virq, hint;

	pr_debug("irq: irq_create_mapping(0x%p, 0x%lx)\n", host, hwirq);

	/* Look for default host if necessary */
	if (host == NULL)
		host = irq_default_host;
	if (host == NULL) {
		printk(KERN_WARNING "irq_create_mapping called for"
		       " NULL host, hwirq=%lx\n", hwirq);
		WARN_ON(1);
		return NO_IRQ;
	}
	pr_debug("irq: -> using host @%p\n", host);

	/* Check if mapping already exists */
	virq = irq_find_mapping(host, hwirq);
	if (virq != NO_IRQ) {
		pr_debug("irq: -> existing mapping on virq %d\n", virq);
		return virq;
	}

	/* Get a virtual interrupt number */
	if (host->revmap_type == IRQ_HOST_MAP_LEGACY) {
		/* Handle legacy */
		virq = (unsigned int)hwirq;
		if (virq == 0 || virq >= NUM_ISA_INTERRUPTS)
			return NO_IRQ;
		return virq;
	} else {
		/* Allocate a virtual interrupt number */
		hint = hwirq % irq_virq_count;
		virq = irq_alloc_virt(host, 1, hint);
		if (virq == NO_IRQ) {
			pr_debug("irq: -> virq allocation failed\n");
			return NO_IRQ;
		}
	}

	if (irq_setup_virq(host, virq, hwirq))
		return NO_IRQ;

	pr_debug("irq: irq %lu on host %s mapped to virtual irq %u\n",
		hwirq, host->of_node ? host->of_node->full_name : "null", virq);

	return virq;
}
EXPORT_SYMBOL_GPL(irq_create_mapping);

unsigned int irq_create_of_mapping(struct device_node *controller,
				   const u32 *intspec, unsigned int intsize)
{
	struct irq_host *host;
	irq_hw_number_t hwirq;
	unsigned int type = IRQ_TYPE_NONE;
	unsigned int virq;

	if (controller == NULL)
		host = irq_default_host;
	else
		host = irq_find_host(controller);
	if (host == NULL) {
		printk(KERN_WARNING "irq: no irq host found for %s !\n",
		       controller->full_name);
		return NO_IRQ;
	}

	/* If host has no translation, then we assume interrupt line */
	if (host->ops->xlate == NULL)
		hwirq = intspec[0];
	else {
		if (host->ops->xlate(host, controller, intspec, intsize,
				     &hwirq, &type))
			return NO_IRQ;
	}

	/* Create mapping */
	virq = irq_create_mapping(host, hwirq);
	if (virq == NO_IRQ)
		return virq;

	/* Set type if specified and different than the current one */
	if (type != IRQ_TYPE_NONE &&
	    type != (irqd_get_trigger_type(irq_get_irq_data(virq))))
		irq_set_irq_type(virq, type);
	return virq;
}
EXPORT_SYMBOL_GPL(irq_create_of_mapping);

void irq_dispose_mapping(unsigned int virq)
{
	struct irq_host *host;
	irq_hw_number_t hwirq;

	if (virq == NO_IRQ)
		return;

	host = irq_map[virq].host;
	if (WARN_ON(host == NULL))
		return;

	/* Never unmap legacy interrupts */
	if (host->revmap_type == IRQ_HOST_MAP_LEGACY)
		return;

	irq_set_status_flags(virq, IRQ_NOREQUEST);

	/* remove chip and handler */
	irq_set_chip_and_handler(virq, NULL, NULL);

	/* Make sure it's completed */
	synchronize_irq(virq);

	/* Tell the PIC about it */
	if (host->ops->unmap)
		host->ops->unmap(host, virq);
	smp_mb();

	/* Clear reverse map */
	hwirq = irq_map[virq].hwirq;
	switch(host->revmap_type) {
	case IRQ_HOST_MAP_LINEAR:
		if (hwirq < host->revmap_data.linear.size)
			host->revmap_data.linear.revmap[hwirq] = NO_IRQ;
		break;
	case IRQ_HOST_MAP_TREE:
		mutex_lock(&revmap_trees_mutex);
		radix_tree_delete(&host->revmap_data.tree, hwirq);
		mutex_unlock(&revmap_trees_mutex);
		break;
	}

	/* Destroy map */
	smp_mb();
	irq_map[virq].hwirq = host->inval_irq;

	irq_free_descs(virq, 1);
	/* Free it */
	irq_free_virt(virq, 1);
}
EXPORT_SYMBOL_GPL(irq_dispose_mapping);

unsigned int irq_find_mapping(struct irq_host *host,
			      irq_hw_number_t hwirq)
{
	unsigned int i;
	unsigned int hint = hwirq % irq_virq_count;

	/* Look for default host if necessary */
	if (host == NULL)
		host = irq_default_host;
	if (host == NULL)
		return NO_IRQ;

	/* legacy -> bail early */
	if (host->revmap_type == IRQ_HOST_MAP_LEGACY)
		return hwirq;

	/* Slow path does a linear search of the map */
	if (hint < NUM_ISA_INTERRUPTS)
		hint = NUM_ISA_INTERRUPTS;
	i = hint;
	do {
		if (irq_map[i].host == host &&
		    irq_map[i].hwirq == hwirq)
			return i;
		i++;
		if (i >= irq_virq_count)
			i = NUM_ISA_INTERRUPTS;
	} while (i != hint);
	return NO_IRQ;
}
EXPORT_SYMBOL_GPL(irq_find_mapping);

#ifdef CONFIG_SMP
int irq_choose_cpu(const struct cpumask *mask)
{
@ -925,232 +535,11 @@ int irq_choose_cpu(const struct cpumask *mask)
}
#endif

unsigned int irq_radix_revmap_lookup(struct irq_host *host,
				     irq_hw_number_t hwirq)
{
	struct irq_map_entry *ptr;
	unsigned int virq;

	if (WARN_ON_ONCE(host->revmap_type != IRQ_HOST_MAP_TREE))
		return irq_find_mapping(host, hwirq);

	/*
	 * The ptr returned references the static global irq_map.
	 * but freeing an irq can delete nodes along the path to
	 * do the lookup via call_rcu.
	 */
	rcu_read_lock();
	ptr = radix_tree_lookup(&host->revmap_data.tree, hwirq);
	rcu_read_unlock();

	/*
	 * If found in radix tree, then fine.
	 * Else fallback to linear lookup - this should not happen in practice
	 * as it means that we failed to insert the node in the radix tree.
	 */
	if (ptr)
		virq = ptr - irq_map;
	else
		virq = irq_find_mapping(host, hwirq);

	return virq;
}

void irq_radix_revmap_insert(struct irq_host *host, unsigned int virq,
			     irq_hw_number_t hwirq)
{
	if (WARN_ON(host->revmap_type != IRQ_HOST_MAP_TREE))
		return;

	if (virq != NO_IRQ) {
		mutex_lock(&revmap_trees_mutex);
		radix_tree_insert(&host->revmap_data.tree, hwirq,
				  &irq_map[virq]);
		mutex_unlock(&revmap_trees_mutex);
	}
}

unsigned int irq_linear_revmap(struct irq_host *host,
			       irq_hw_number_t hwirq)
{
	unsigned int *revmap;

	if (WARN_ON_ONCE(host->revmap_type != IRQ_HOST_MAP_LINEAR))
		return irq_find_mapping(host, hwirq);

	/* Check revmap bounds */
	if (unlikely(hwirq >= host->revmap_data.linear.size))
		return irq_find_mapping(host, hwirq);

	/* Check if revmap was allocated */
	revmap = host->revmap_data.linear.revmap;
	if (unlikely(revmap == NULL))
		return irq_find_mapping(host, hwirq);

	/* Fill up revmap with slow path if no mapping found */
	if (unlikely(revmap[hwirq] == NO_IRQ))
		revmap[hwirq] = irq_find_mapping(host, hwirq);

	return revmap[hwirq];
}

unsigned int irq_alloc_virt(struct irq_host *host,
			    unsigned int count,
			    unsigned int hint)
{
	unsigned long flags;
	unsigned int i, j, found = NO_IRQ;

	if (count == 0 || count > (irq_virq_count - NUM_ISA_INTERRUPTS))
		return NO_IRQ;

	raw_spin_lock_irqsave(&irq_big_lock, flags);

	/* Use hint for 1 interrupt if any */
	if (count == 1 && hint >= NUM_ISA_INTERRUPTS &&
	    hint < irq_virq_count && irq_map[hint].host == NULL) {
		found = hint;
		goto hint_found;
	}

	/* Look for count consecutive numbers in the allocatable
	 * (non-legacy) space
	 */
	for (i = NUM_ISA_INTERRUPTS, j = 0; i < irq_virq_count; i++) {
		if (irq_map[i].host != NULL)
			j = 0;
		else
			j++;

		if (j == count) {
			found = i - count + 1;
			break;
		}
	}
	if (found == NO_IRQ) {
		raw_spin_unlock_irqrestore(&irq_big_lock, flags);
		return NO_IRQ;
	}
hint_found:
	for (i = found; i < (found + count); i++) {
		irq_map[i].hwirq = host->inval_irq;
		smp_wmb();
		irq_map[i].host = host;
	}
	raw_spin_unlock_irqrestore(&irq_big_lock, flags);
	return found;
}

void irq_free_virt(unsigned int virq, unsigned int count)
{
	unsigned long flags;
	unsigned int i;

	WARN_ON(virq < NUM_ISA_INTERRUPTS);
	WARN_ON(count == 0 || (virq + count) > irq_virq_count);

	if (virq < NUM_ISA_INTERRUPTS) {
		if (virq + count < NUM_ISA_INTERRUPTS)
			return;
		count -= NUM_ISA_INTERRUPTS - virq;
		virq = NUM_ISA_INTERRUPTS;
	}

	if (count > irq_virq_count || virq > irq_virq_count - count) {
		if (virq > irq_virq_count)
			return;
		count = irq_virq_count - virq;
	}

	raw_spin_lock_irqsave(&irq_big_lock, flags);
	for (i = virq; i < (virq + count); i++) {
		struct irq_host *host;

		host = irq_map[i].host;
		irq_map[i].hwirq = host->inval_irq;
		smp_wmb();
		irq_map[i].host = NULL;
	}
	raw_spin_unlock_irqrestore(&irq_big_lock, flags);
}

int arch_early_irq_init(void)
{
	return 0;
}

#ifdef CONFIG_VIRQ_DEBUG
static int virq_debug_show(struct seq_file *m, void *private)
{
	unsigned long flags;
	struct irq_desc *desc;
	const char *p;
	static const char none[] = "none";
	void *data;
	int i;

	seq_printf(m, "%-5s %-7s %-15s %-18s %s\n", "virq", "hwirq",
		   "chip name", "chip data", "host name");

	for (i = 1; i < nr_irqs; i++) {
		desc = irq_to_desc(i);
		if (!desc)
			continue;

		raw_spin_lock_irqsave(&desc->lock, flags);

		if (desc->action && desc->action->handler) {
			struct irq_chip *chip;

			seq_printf(m, "%5d ", i);
			seq_printf(m, "0x%05lx ", irq_map[i].hwirq);

			chip = irq_desc_get_chip(desc);
			if (chip && chip->name)
				p = chip->name;
			else
				p = none;
			seq_printf(m, "%-15s ", p);

			data = irq_desc_get_chip_data(desc);
			seq_printf(m, "0x%16p ", data);

			if (irq_map[i].host && irq_map[i].host->of_node)
				p = irq_map[i].host->of_node->full_name;
			else
				p = none;
			seq_printf(m, "%s\n", p);
		}

		raw_spin_unlock_irqrestore(&desc->lock, flags);
	}

	return 0;
}

static int virq_debug_open(struct inode *inode, struct file *file)
{
	return single_open(file, virq_debug_show, inode->i_private);
}

static const struct file_operations virq_debug_fops = {
	.open = virq_debug_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = single_release,
};

static int __init irq_debugfs_init(void)
{
	if (debugfs_create_file("virq_mapping", S_IRUGO, powerpc_debugfs_root,
				NULL, &virq_debug_fops) == NULL)
		return -ENOMEM;

	return 0;
}
__initcall(irq_debugfs_init);
#endif /* CONFIG_VIRQ_DEBUG */

#ifdef CONFIG_PPC64
static int __init setup_noirqdistrib(char *str)
{

@ -21,7 +21,7 @@
#include <asm/prom.h>

static struct device_node *cpld_pic_node;
static struct irq_host *cpld_pic_host;
static struct irq_domain *cpld_pic_host;

/*
 * Bits to ignore in the misc_status register
@ -123,13 +123,13 @@ cpld_pic_cascade(unsigned int irq, struct irq_desc *desc)
}

static int
cpld_pic_host_match(struct irq_host *h, struct device_node *node)
cpld_pic_host_match(struct irq_domain *h, struct device_node *node)
{
	return cpld_pic_node == node;
}

static int
cpld_pic_host_map(struct irq_host *h, unsigned int virq,
cpld_pic_host_map(struct irq_domain *h, unsigned int virq,
		  irq_hw_number_t hw)
{
	irq_set_status_flags(virq, IRQ_LEVEL);
@ -137,8 +137,7 @@ cpld_pic_host_map(struct irq_host *h, unsigned int virq,
	return 0;
}

static struct
irq_host_ops cpld_pic_host_ops = {
static const struct irq_domain_ops cpld_pic_host_ops = {
	.match = cpld_pic_host_match,
	.map = cpld_pic_host_map,
};
@ -191,8 +190,7 @@ mpc5121_ads_cpld_pic_init(void)

	cpld_pic_node = of_node_get(np);

	cpld_pic_host =
		irq_alloc_host(np, IRQ_HOST_MAP_LINEAR, 16, &cpld_pic_host_ops, 16);
	cpld_pic_host = irq_domain_add_linear(np, 16, &cpld_pic_host_ops, NULL);
	if (!cpld_pic_host) {
		printk(KERN_ERR "CPLD PIC: failed to allocate irq host!\n");
		goto end;

@ -45,7 +45,7 @@ static struct of_device_id mpc5200_gpio_ids[] __initdata = {
struct media5200_irq {
	void __iomem *regs;
	spinlock_t lock;
	struct irq_host *irqhost;
	struct irq_domain *irqhost;
};
struct media5200_irq media5200_irq;

@ -112,7 +112,7 @@ void media5200_irq_cascade(unsigned int virq, struct irq_desc *desc)
	raw_spin_unlock(&desc->lock);
}

static int media5200_irq_map(struct irq_host *h, unsigned int virq,
static int media5200_irq_map(struct irq_domain *h, unsigned int virq,
			     irq_hw_number_t hw)
{
	pr_debug("%s: h=%p, virq=%i, hwirq=%i\n", __func__, h, virq, (int)hw);
@ -122,7 +122,7 @@ static int media5200_irq_map(struct irq_host *h, unsigned int virq,
	return 0;
}

static int media5200_irq_xlate(struct irq_host *h, struct device_node *ct,
static int media5200_irq_xlate(struct irq_domain *h, struct device_node *ct,
			       const u32 *intspec, unsigned int intsize,
			       irq_hw_number_t *out_hwirq,
			       unsigned int *out_flags)
@ -136,7 +136,7 @@ static int media5200_irq_xlate(struct irq_host *h, struct device_node *ct,
	return 0;
}

static struct irq_host_ops media5200_irq_ops = {
static const struct irq_domain_ops media5200_irq_ops = {
	.map = media5200_irq_map,
	.xlate = media5200_irq_xlate,
};
@ -173,15 +173,12 @@ static void __init media5200_init_irq(void)

	spin_lock_init(&media5200_irq.lock);

	media5200_irq.irqhost = irq_alloc_host(fpga_np, IRQ_HOST_MAP_LINEAR,
					       MEDIA5200_NUM_IRQS,
					       &media5200_irq_ops, -1);
	media5200_irq.irqhost = irq_domain_add_linear(fpga_np,
		MEDIA5200_NUM_IRQS, &media5200_irq_ops, &media5200_irq);
	if (!media5200_irq.irqhost)
		goto out;
	pr_debug("%s: allocated irqhost\n", __func__);

	media5200_irq.irqhost->host_data = &media5200_irq;

	irq_set_handler_data(cascade_virq, &media5200_irq);
	irq_set_chained_handler(cascade_virq, media5200_irq_cascade);

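Note how the separate irqhost->host_data assignment disappears here too: under the new API the private pointer must be supplied at creation time, which closes the window where ->map() could run before host_data was set. The resulting idiom, in miniature (sketch; struct example_priv, EXAMPLE_NUM_IRQS and example_ops are hypothetical):

	static int __init example_init_irq(struct device_node *np,
					   struct example_priv *priv)
	{
		/* host_data is valid before the first ->map() can run */
		priv->domain = irq_domain_add_linear(np, EXAMPLE_NUM_IRQS,
						     &example_ops, priv);
		return priv->domain ? 0 : -ENOMEM;
	}
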
@@ -81,7 +81,7 @@ MODULE_LICENSE("GPL");
  * @regs: virtual address of GPT registers
  * @lock: spinlock to coordinate between different functions.
  * @gc: gpio_chip instance structure; used when GPIO is enabled
- * @irqhost: Pointer to irq_host instance; used when IRQ mode is supported
+ * @irqhost: Pointer to irq_domain instance; used when IRQ mode is supported
  * @wdt_mode: only relevant for gpt0: bit 0 (MPC52xx_GPT_CAN_WDT) indicates
  *   if the gpt may be used as wdt, bit 1 (MPC52xx_GPT_IS_WDT) indicates
  *   if the timer is actively used as wdt which blocks gpt functions
@@ -91,7 +91,7 @@ struct mpc52xx_gpt_priv {
 	struct device *dev;
 	struct mpc52xx_gpt __iomem *regs;
 	spinlock_t lock;
-	struct irq_host *irqhost;
+	struct irq_domain *irqhost;
 	u32 ipb_freq;
 	u8 wdt_mode;
 
@@ -204,7 +204,7 @@ void mpc52xx_gpt_irq_cascade(unsigned int virq, struct irq_desc *desc)
 	}
 }
 
-static int mpc52xx_gpt_irq_map(struct irq_host *h, unsigned int virq,
+static int mpc52xx_gpt_irq_map(struct irq_domain *h, unsigned int virq,
 			       irq_hw_number_t hw)
 {
 	struct mpc52xx_gpt_priv *gpt = h->host_data;
@@ -216,7 +216,7 @@ static int mpc52xx_gpt_irq_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int mpc52xx_gpt_irq_xlate(struct irq_host *h, struct device_node *ct,
+static int mpc52xx_gpt_irq_xlate(struct irq_domain *h, struct device_node *ct,
 				 const u32 *intspec, unsigned int intsize,
 				 irq_hw_number_t *out_hwirq,
 				 unsigned int *out_flags)
@@ -236,7 +236,7 @@ static int mpc52xx_gpt_irq_xlate(struct irq_host *h, struct device_node *ct,
 	return 0;
 }
 
-static struct irq_host_ops mpc52xx_gpt_irq_ops = {
+static const struct irq_domain_ops mpc52xx_gpt_irq_ops = {
 	.map = mpc52xx_gpt_irq_map,
 	.xlate = mpc52xx_gpt_irq_xlate,
 };
@@ -252,14 +252,12 @@ mpc52xx_gpt_irq_setup(struct mpc52xx_gpt_priv *gpt, struct device_node *node)
 	if (!cascade_virq)
 		return;
 
-	gpt->irqhost = irq_alloc_host(node, IRQ_HOST_MAP_LINEAR, 1,
-				      &mpc52xx_gpt_irq_ops, -1);
+	gpt->irqhost = irq_domain_add_linear(node, 1, &mpc52xx_gpt_irq_ops, gpt);
 	if (!gpt->irqhost) {
-		dev_err(gpt->dev, "irq_alloc_host() failed\n");
+		dev_err(gpt->dev, "irq_domain_add_linear() failed\n");
 		return;
 	}
 
-	gpt->irqhost->host_data = gpt;
 	irq_set_handler_data(cascade_virq, gpt);
 	irq_set_chained_handler(cascade_virq, mpc52xx_gpt_irq_cascade);
 

@@ -132,7 +132,7 @@ static struct of_device_id mpc52xx_sdma_ids[] __initdata = {
 
 static struct mpc52xx_intr __iomem *intr;
 static struct mpc52xx_sdma __iomem *sdma;
-static struct irq_host *mpc52xx_irqhost = NULL;
+static struct irq_domain *mpc52xx_irqhost = NULL;
 
 static unsigned char mpc52xx_map_senses[4] = {
 	IRQ_TYPE_LEVEL_HIGH,
@@ -301,7 +301,7 @@ static int mpc52xx_is_extirq(int l1, int l2)
 /**
  * mpc52xx_irqhost_xlate - translate virq# from device tree interrupts property
  */
-static int mpc52xx_irqhost_xlate(struct irq_host *h, struct device_node *ct,
+static int mpc52xx_irqhost_xlate(struct irq_domain *h, struct device_node *ct,
 				 const u32 *intspec, unsigned int intsize,
 				 irq_hw_number_t *out_hwirq,
 				 unsigned int *out_flags)
@@ -335,7 +335,7 @@ static int mpc52xx_irqhost_xlate(struct irq_host *h, struct device_node *ct,
 /**
  * mpc52xx_irqhost_map - Hook to map from virq to an irq_chip structure
  */
-static int mpc52xx_irqhost_map(struct irq_host *h, unsigned int virq,
+static int mpc52xx_irqhost_map(struct irq_domain *h, unsigned int virq,
 			       irq_hw_number_t irq)
 {
 	int l1irq;
@@ -384,7 +384,7 @@ static int mpc52xx_irqhost_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static struct irq_host_ops mpc52xx_irqhost_ops = {
+static const struct irq_domain_ops mpc52xx_irqhost_ops = {
 	.xlate = mpc52xx_irqhost_xlate,
 	.map = mpc52xx_irqhost_map,
 };
@@ -444,9 +444,9 @@ void __init mpc52xx_init_irq(void)
 	 * As last step, add an irq host to translate the real
 	 * hw irq information provided by the ofw to linux virq
 	 */
-	mpc52xx_irqhost = irq_alloc_host(picnode, IRQ_HOST_MAP_LINEAR,
+	mpc52xx_irqhost = irq_domain_add_linear(picnode,
 					 MPC52xx_IRQ_HIGHTESTHWIRQ,
-					 &mpc52xx_irqhost_ops, -1);
+					 &mpc52xx_irqhost_ops, NULL);
 
 	if (!mpc52xx_irqhost)
 		panic(__FILE__ ": Cannot allocate the IRQ host\n");

@@ -29,7 +29,7 @@ static DEFINE_RAW_SPINLOCK(pci_pic_lock);
 
 struct pq2ads_pci_pic {
 	struct device_node *node;
-	struct irq_host *host;
+	struct irq_domain *host;
 
 	struct {
 		u32 stat;
@@ -103,7 +103,7 @@ static void pq2ads_pci_irq_demux(unsigned int irq, struct irq_desc *desc)
 	}
 }
 
-static int pci_pic_host_map(struct irq_host *h, unsigned int virq,
+static int pci_pic_host_map(struct irq_domain *h, unsigned int virq,
 			    irq_hw_number_t hw)
 {
 	irq_set_status_flags(virq, IRQ_LEVEL);
@@ -112,14 +112,14 @@ static int pci_pic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static struct irq_host_ops pci_pic_host_ops = {
+static const struct irq_domain_ops pci_pic_host_ops = {
 	.map = pci_pic_host_map,
 };
 
 int __init pq2ads_pci_init_irq(void)
 {
 	struct pq2ads_pci_pic *priv;
-	struct irq_host *host;
+	struct irq_domain *host;
 	struct device_node *np;
 	int ret = -ENODEV;
 	int irq;
@@ -156,17 +156,13 @@ int __init pq2ads_pci_init_irq(void)
 	out_be32(&priv->regs->mask, ~0);
 	mb();
 
-	host = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR, NUM_IRQS,
-			      &pci_pic_host_ops, NUM_IRQS);
+	host = irq_domain_add_linear(np, NUM_IRQS, &pci_pic_host_ops, priv);
 	if (!host) {
 		ret = -ENOMEM;
 		goto out_unmap_regs;
 	}
 
-	host->host_data = priv;
-
 	priv->host = host;
-	host->host_data = priv;
 	irq_set_handler_data(irq, priv);
 	irq_set_chained_handler(irq, pq2ads_pci_irq_demux);
 

@@ -51,7 +51,7 @@ static struct socrates_fpga_irq_info fpga_irqs[SOCRATES_FPGA_NUM_IRQS] = {
 static DEFINE_RAW_SPINLOCK(socrates_fpga_pic_lock);
 
 static void __iomem *socrates_fpga_pic_iobase;
-static struct irq_host *socrates_fpga_pic_irq_host;
+static struct irq_domain *socrates_fpga_pic_irq_host;
 static unsigned int socrates_fpga_irqs[3];
 
 static inline uint32_t socrates_fpga_pic_read(int reg)
@@ -227,7 +227,7 @@ static struct irq_chip socrates_fpga_pic_chip = {
 	.irq_set_type = socrates_fpga_pic_set_type,
 };
 
-static int socrates_fpga_pic_host_map(struct irq_host *h, unsigned int virq,
+static int socrates_fpga_pic_host_map(struct irq_domain *h, unsigned int virq,
 		irq_hw_number_t hwirq)
 {
 	/* All interrupts are LEVEL sensitive */
@@ -238,7 +238,7 @@ static int socrates_fpga_pic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int socrates_fpga_pic_host_xlate(struct irq_host *h,
+static int socrates_fpga_pic_host_xlate(struct irq_domain *h,
 		struct device_node *ct, const u32 *intspec, unsigned int intsize,
 		irq_hw_number_t *out_hwirq, unsigned int *out_flags)
 {
@@ -269,7 +269,7 @@ static int socrates_fpga_pic_host_xlate(struct irq_host *h,
 	return 0;
 }
 
-static struct irq_host_ops socrates_fpga_pic_host_ops = {
+static const struct irq_domain_ops socrates_fpga_pic_host_ops = {
 	.map = socrates_fpga_pic_host_map,
 	.xlate = socrates_fpga_pic_host_xlate,
 };
@@ -279,10 +279,9 @@ void socrates_fpga_pic_init(struct device_node *pic)
 	unsigned long flags;
 	int i;
 
-	/* Setup an irq_host structure */
-	socrates_fpga_pic_irq_host = irq_alloc_host(pic, IRQ_HOST_MAP_LINEAR,
-			SOCRATES_FPGA_NUM_IRQS, &socrates_fpga_pic_host_ops,
-			SOCRATES_FPGA_NUM_IRQS);
+	/* Setup an irq_domain structure */
+	socrates_fpga_pic_irq_host = irq_domain_add_linear(pic,
+			SOCRATES_FPGA_NUM_IRQS, &socrates_fpga_pic_host_ops, NULL);
 	if (socrates_fpga_pic_irq_host == NULL) {
 		pr_err("FPGA PIC: Unable to allocate host\n");
 		return;

@@ -50,7 +50,7 @@
 static DEFINE_RAW_SPINLOCK(gef_pic_lock);
 
 static void __iomem *gef_pic_irq_reg_base;
-static struct irq_host *gef_pic_irq_host;
+static struct irq_domain *gef_pic_irq_host;
 static int gef_pic_cascade_irq;
 
 /*
@@ -153,7 +153,7 @@ static struct irq_chip gef_pic_chip = {
 /* When an interrupt is being configured, this call allows some flexibilty
  * in deciding which irq_chip structure is used
  */
-static int gef_pic_host_map(struct irq_host *h, unsigned int virq,
+static int gef_pic_host_map(struct irq_domain *h, unsigned int virq,
 			    irq_hw_number_t hwirq)
 {
 	/* All interrupts are LEVEL sensitive */
@@ -163,7 +163,7 @@ static int gef_pic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int gef_pic_host_xlate(struct irq_host *h, struct device_node *ct,
+static int gef_pic_host_xlate(struct irq_domain *h, struct device_node *ct,
 			      const u32 *intspec, unsigned int intsize,
 			      irq_hw_number_t *out_hwirq, unsigned int *out_flags)
 {
@@ -177,7 +177,7 @@ static int gef_pic_host_xlate(struct irq_host *h, struct device_node *ct,
 	return 0;
 }
 
-static struct irq_host_ops gef_pic_host_ops = {
+static const struct irq_domain_ops gef_pic_host_ops = {
 	.map = gef_pic_host_map,
 	.xlate = gef_pic_host_xlate,
 };
@@ -211,10 +211,9 @@ void __init gef_pic_init(struct device_node *np)
 		return;
 	}
 
-	/* Setup an irq_host structure */
-	gef_pic_irq_host = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR,
-					  GEF_PIC_NUM_IRQS,
-					  &gef_pic_host_ops, NO_IRQ);
+	/* Setup an irq_domain structure */
+	gef_pic_irq_host = irq_domain_add_linear(np, GEF_PIC_NUM_IRQS,
+					  &gef_pic_host_ops, NULL);
 	if (gef_pic_irq_host == NULL)
 		return;
 

@@ -67,7 +67,7 @@
 
 
 struct axon_msic {
-	struct irq_host *irq_host;
+	struct irq_domain *irq_domain;
 	__le32 *fifo_virt;
 	dma_addr_t fifo_phys;
 	dcr_host_t dcr_host;
@@ -152,7 +152,7 @@ static void axon_msi_cascade(unsigned int irq, struct irq_desc *desc)
 
 static struct axon_msic *find_msi_translator(struct pci_dev *dev)
 {
-	struct irq_host *irq_host;
+	struct irq_domain *irq_domain;
 	struct device_node *dn, *tmp;
 	const phandle *ph;
 	struct axon_msic *msic = NULL;
@@ -184,14 +184,14 @@ static struct axon_msic *find_msi_translator(struct pci_dev *dev)
 		goto out_error;
 	}
 
-	irq_host = irq_find_host(dn);
-	if (!irq_host) {
-		dev_dbg(&dev->dev, "axon_msi: no irq_host found for node %s\n",
+	irq_domain = irq_find_host(dn);
+	if (!irq_domain) {
+		dev_dbg(&dev->dev, "axon_msi: no irq_domain found for node %s\n",
 			dn->full_name);
 		goto out_error;
 	}
 
-	msic = irq_host->host_data;
+	msic = irq_domain->host_data;
 
 out_error:
 	of_node_put(dn);
@@ -280,7 +280,7 @@ static int axon_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 	BUILD_BUG_ON(NR_IRQS > 65536);
 
 	list_for_each_entry(entry, &dev->msi_list, list) {
-		virq = irq_create_direct_mapping(msic->irq_host);
+		virq = irq_create_direct_mapping(msic->irq_domain);
 		if (virq == NO_IRQ) {
 			dev_warn(&dev->dev,
 				 "axon_msi: virq allocation failed!\n");
@@ -318,7 +318,7 @@ static struct irq_chip msic_irq_chip = {
 	.name = "AXON-MSI",
 };
 
-static int msic_host_map(struct irq_host *h, unsigned int virq,
+static int msic_host_map(struct irq_domain *h, unsigned int virq,
 			 irq_hw_number_t hw)
 {
 	irq_set_chip_data(virq, h->host_data);
@@ -327,7 +327,7 @@ static int msic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static struct irq_host_ops msic_host_ops = {
+static const struct irq_domain_ops msic_host_ops = {
 	.map	= msic_host_map,
 };
 
@@ -337,7 +337,7 @@ static void axon_msi_shutdown(struct platform_device *device)
 	u32 tmp;
 
 	pr_devel("axon_msi: disabling %s\n",
-		 msic->irq_host->of_node->full_name);
+		 msic->irq_domain->of_node->full_name);
 	tmp = dcr_read(msic->dcr_host, MSIC_CTRL_REG);
 	tmp &= ~MSIC_CTRL_ENABLE & ~MSIC_CTRL_IRQ_ENABLE;
 	msic_dcr_write(msic, MSIC_CTRL_REG, tmp);
@@ -392,16 +392,13 @@ static int axon_msi_probe(struct platform_device *device)
 	}
 	memset(msic->fifo_virt, 0xff, MSIC_FIFO_SIZE_BYTES);
 
-	msic->irq_host = irq_alloc_host(dn, IRQ_HOST_MAP_NOMAP,
-					NR_IRQS, &msic_host_ops, 0);
-	if (!msic->irq_host) {
-		printk(KERN_ERR "axon_msi: couldn't allocate irq_host for %s\n",
+	msic->irq_domain = irq_domain_add_nomap(dn, &msic_host_ops, msic);
+	if (!msic->irq_domain) {
+		printk(KERN_ERR "axon_msi: couldn't allocate irq_domain for %s\n",
 		       dn->full_name);
 		goto out_free_fifo;
 	}
 
-	msic->irq_host->host_data = msic;
-
 	irq_set_handler_data(virq, msic);
 	irq_set_chained_handler(virq, axon_msi_cascade);
 	pr_devel("axon_msi: irq 0x%x setup for axon_msi\n", virq);

@@ -34,7 +34,7 @@ static DEFINE_RAW_SPINLOCK(beatic_irq_mask_lock);
 static uint64_t	beatic_irq_mask_enable[(MAX_IRQS+255)/64];
 static uint64_t	beatic_irq_mask_ack[(MAX_IRQS+255)/64];
 
-static struct irq_host *beatic_host;
+static struct irq_domain *beatic_host;
 
 /*
  * In this implementation, "virq" == "IRQ plug number",
@@ -122,7 +122,7 @@ static struct irq_chip beatic_pic = {
  *
  * Note that the number (virq) is already assigned at upper layer.
  */
-static void beatic_pic_host_unmap(struct irq_host *h, unsigned int virq)
+static void beatic_pic_host_unmap(struct irq_domain *h, unsigned int virq)
 {
 	beat_destruct_irq_plug(virq);
 }
@@ -133,7 +133,7 @@ static void beatic_pic_host_unmap(struct irq_host *h, unsigned int virq)
  *
  * Note that the number (virq) is already assigned at upper layer.
  */
-static int beatic_pic_host_map(struct irq_host *h, unsigned int virq,
+static int beatic_pic_host_map(struct irq_domain *h, unsigned int virq,
 			       irq_hw_number_t hw)
 {
 	int64_t	err;
@@ -154,7 +154,7 @@ static int beatic_pic_host_map(struct irq_host *h, unsigned int virq,
  * Called from irq_create_of_mapping() only.
  * Note: We have only 1 entry to translate.
  */
-static int beatic_pic_host_xlate(struct irq_host *h, struct device_node *ct,
+static int beatic_pic_host_xlate(struct irq_domain *h, struct device_node *ct,
 				 const u32 *intspec, unsigned int intsize,
 				 irq_hw_number_t *out_hwirq,
 				 unsigned int *out_flags)
@@ -166,13 +166,13 @@ static int beatic_pic_host_xlate(struct irq_host *h, struct device_node *ct,
 	return 0;
 }
 
-static int beatic_pic_host_match(struct irq_host *h, struct device_node *np)
+static int beatic_pic_host_match(struct irq_domain *h, struct device_node *np)
 {
 	/* Match all */
 	return 1;
 }
 
-static struct irq_host_ops beatic_pic_host_ops = {
+static const struct irq_domain_ops beatic_pic_host_ops = {
 	.map = beatic_pic_host_map,
 	.unmap = beatic_pic_host_unmap,
 	.xlate = beatic_pic_host_xlate,
@@ -239,9 +239,7 @@ void __init beatic_init_IRQ(void)
 	ppc_md.get_irq = beatic_get_irq;
 
 	/* Allocate an irq host */
-	beatic_host = irq_alloc_host(NULL, IRQ_HOST_MAP_NOMAP, 0,
-				     &beatic_pic_host_ops,
-				     0);
+	beatic_host = irq_domain_add_nomap(NULL, &beatic_pic_host_ops, NULL);
 	BUG_ON(beatic_host == NULL);
 	irq_set_default_host(beatic_host);
 }

@@ -56,7 +56,7 @@ struct iic {
 
 static DEFINE_PER_CPU(struct iic, cpu_iic);
 #define IIC_NODE_COUNT	2
-static struct irq_host *iic_host;
+static struct irq_domain *iic_host;
 
 /* Convert between "pending" bits and hw irq number */
 static irq_hw_number_t iic_pending_to_hwnum(struct cbe_iic_pending_bits bits)
@@ -186,7 +186,7 @@ void iic_message_pass(int cpu, int msg)
 	out_be64(&per_cpu(cpu_iic, cpu).regs->generate, (0xf - msg) << 4);
 }
 
-struct irq_host *iic_get_irq_host(int node)
+struct irq_domain *iic_get_irq_host(int node)
 {
 	return iic_host;
 }
@@ -222,13 +222,13 @@ void iic_request_IPIs(void)
 #endif /* CONFIG_SMP */
 
 
-static int iic_host_match(struct irq_host *h, struct device_node *node)
+static int iic_host_match(struct irq_domain *h, struct device_node *node)
 {
 	return of_device_is_compatible(node,
 				    "IBM,CBEA-Internal-Interrupt-Controller");
 }
 
-static int iic_host_map(struct irq_host *h, unsigned int virq,
+static int iic_host_map(struct irq_domain *h, unsigned int virq,
 			irq_hw_number_t hw)
 {
 	switch (hw & IIC_IRQ_TYPE_MASK) {
@@ -245,7 +245,7 @@ static int iic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int iic_host_xlate(struct irq_host *h, struct device_node *ct,
+static int iic_host_xlate(struct irq_domain *h, struct device_node *ct,
 			  const u32 *intspec, unsigned int intsize,
 			  irq_hw_number_t *out_hwirq, unsigned int *out_flags)
 
@@ -285,7 +285,7 @@ static int iic_host_xlate(struct irq_host *h, struct device_node *ct,
 	return 0;
 }
 
-static struct irq_host_ops iic_host_ops = {
+static const struct irq_domain_ops iic_host_ops = {
 	.match = iic_host_match,
 	.map = iic_host_map,
 	.xlate = iic_host_xlate,
@@ -378,8 +378,8 @@ static int __init setup_iic(void)
 void __init iic_init_IRQ(void)
 {
 	/* Setup an irq host data structure */
-	iic_host = irq_alloc_host(NULL, IRQ_HOST_MAP_LINEAR, IIC_SOURCE_COUNT,
-				  &iic_host_ops, IIC_IRQ_INVALID);
+	iic_host = irq_domain_add_linear(NULL, IIC_SOURCE_COUNT, &iic_host_ops,
+					 NULL);
 	BUG_ON(iic_host == NULL);
 	irq_set_default_host(iic_host);
 

@@ -62,7 +62,7 @@ enum {
 #define SPIDER_IRQ_INVALID	63
 
 struct spider_pic {
-	struct irq_host		*host;
+	struct irq_domain	*host;
 	void __iomem		*regs;
 	unsigned int		node_id;
 };
@@ -168,7 +168,7 @@ static struct irq_chip spider_pic = {
 	.irq_set_type = spider_set_irq_type,
 };
 
-static int spider_host_map(struct irq_host *h, unsigned int virq,
+static int spider_host_map(struct irq_domain *h, unsigned int virq,
 			irq_hw_number_t hw)
 {
 	irq_set_chip_data(virq, h->host_data);
@@ -180,7 +180,7 @@ static int spider_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int spider_host_xlate(struct irq_host *h, struct device_node *ct,
+static int spider_host_xlate(struct irq_domain *h, struct device_node *ct,
 			   const u32 *intspec, unsigned int intsize,
 			   irq_hw_number_t *out_hwirq, unsigned int *out_flags)
 
@@ -194,7 +194,7 @@ static int spider_host_xlate(struct irq_host *h, struct device_node *ct,
 	return 0;
 }
 
-static struct irq_host_ops spider_host_ops = {
+static const struct irq_domain_ops spider_host_ops = {
 	.map = spider_host_map,
 	.xlate = spider_host_xlate,
 };
@@ -299,12 +299,10 @@ static void __init spider_init_one(struct device_node *of_node, int chip,
 		panic("spider_pic: can't map registers !");
 
 	/* Allocate a host */
-	pic->host = irq_alloc_host(of_node, IRQ_HOST_MAP_LINEAR,
-				   SPIDER_SRC_COUNT, &spider_host_ops,
-				   SPIDER_IRQ_INVALID);
+	pic->host = irq_domain_add_linear(of_node, SPIDER_SRC_COUNT,
+					  &spider_host_ops, pic);
 	if (pic->host == NULL)
 		panic("spider_pic: can't allocate irq host !");
-	pic->host->host_data = pic;
 
 	/* Go through all sources and disable them */
 	for (i = 0; i < SPIDER_SRC_COUNT; i++) {

@@ -96,9 +96,9 @@ static struct irq_chip flipper_pic = {
  *
  */
 
-static struct irq_host *flipper_irq_host;
+static struct irq_domain *flipper_irq_host;
 
-static int flipper_pic_map(struct irq_host *h, unsigned int virq,
+static int flipper_pic_map(struct irq_domain *h, unsigned int virq,
 			   irq_hw_number_t hwirq)
 {
 	irq_set_chip_data(virq, h->host_data);
@@ -107,13 +107,13 @@ static int flipper_pic_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int flipper_pic_match(struct irq_host *h, struct device_node *np)
+static int flipper_pic_match(struct irq_domain *h, struct device_node *np)
 {
 	return 1;
 }
 
 
-static struct irq_host_ops flipper_irq_host_ops = {
+static const struct irq_domain_ops flipper_irq_domain_ops = {
 	.map = flipper_pic_map,
 	.match = flipper_pic_match,
 };
@@ -130,10 +130,10 @@ static void __flipper_quiesce(void __iomem *io_base)
 	out_be32(io_base + FLIPPER_ICR, 0xffffffff);
 }
 
-struct irq_host * __init flipper_pic_init(struct device_node *np)
+struct irq_domain * __init flipper_pic_init(struct device_node *np)
 {
 	struct device_node *pi;
-	struct irq_host *irq_host = NULL;
+	struct irq_domain *irq_domain = NULL;
 	struct resource res;
 	void __iomem *io_base;
 	int retval;
@@ -159,17 +159,15 @@ struct irq_host * __init flipper_pic_init(struct device_node *np)
 
 	__flipper_quiesce(io_base);
 
-	irq_host = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR, FLIPPER_NR_IRQS,
-				  &flipper_irq_host_ops, -1);
-	if (!irq_host) {
-		pr_err("failed to allocate irq_host\n");
+	irq_domain = irq_domain_add_linear(np, FLIPPER_NR_IRQS,
+				  &flipper_irq_domain_ops, io_base);
+	if (!irq_domain) {
+		pr_err("failed to allocate irq_domain\n");
 		return NULL;
 	}
 
-	irq_host->host_data = io_base;
-
 out:
-	return irq_host;
+	return irq_domain;
 }
 
 unsigned int flipper_pic_get_irq(void)

@@ -89,9 +89,9 @@ static struct irq_chip hlwd_pic = {
  *
  */
 
-static struct irq_host *hlwd_irq_host;
+static struct irq_domain *hlwd_irq_host;
 
-static int hlwd_pic_map(struct irq_host *h, unsigned int virq,
+static int hlwd_pic_map(struct irq_domain *h, unsigned int virq,
 			   irq_hw_number_t hwirq)
 {
 	irq_set_chip_data(virq, h->host_data);
@@ -100,11 +100,11 @@ static int hlwd_pic_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static struct irq_host_ops hlwd_irq_host_ops = {
+static const struct irq_domain_ops hlwd_irq_domain_ops = {
 	.map = hlwd_pic_map,
 };
 
-static unsigned int __hlwd_pic_get_irq(struct irq_host *h)
+static unsigned int __hlwd_pic_get_irq(struct irq_domain *h)
 {
 	void __iomem *io_base = h->host_data;
 	int irq;
@@ -123,14 +123,14 @@ static void hlwd_pic_irq_cascade(unsigned int cascade_virq,
 				      struct irq_desc *desc)
 {
 	struct irq_chip *chip = irq_desc_get_chip(desc);
-	struct irq_host *irq_host = irq_get_handler_data(cascade_virq);
+	struct irq_domain *irq_domain = irq_get_handler_data(cascade_virq);
 	unsigned int virq;
 
 	raw_spin_lock(&desc->lock);
 	chip->irq_mask(&desc->irq_data); /* IRQ_LEVEL */
 	raw_spin_unlock(&desc->lock);
 
-	virq = __hlwd_pic_get_irq(irq_host);
+	virq = __hlwd_pic_get_irq(irq_domain);
 	if (virq != NO_IRQ)
 		generic_handle_irq(virq);
 	else
@@ -155,9 +155,9 @@ static void __hlwd_quiesce(void __iomem *io_base)
 	out_be32(io_base + HW_BROADWAY_ICR, 0xffffffff);
 }
 
-struct irq_host *hlwd_pic_init(struct device_node *np)
+struct irq_domain *hlwd_pic_init(struct device_node *np)
 {
-	struct irq_host *irq_host;
+	struct irq_domain *irq_domain;
 	struct resource res;
 	void __iomem *io_base;
 	int retval;
@@ -177,15 +177,14 @@ struct irq_host *hlwd_pic_init(struct device_node *np)
 
 	__hlwd_quiesce(io_base);
 
-	irq_host = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR, HLWD_NR_IRQS,
-				  &hlwd_irq_host_ops, -1);
-	if (!irq_host) {
-		pr_err("failed to allocate irq_host\n");
+	irq_domain = irq_domain_add_linear(np, HLWD_NR_IRQS,
+					   &hlwd_irq_domain_ops, io_base);
+	if (!irq_domain) {
+		pr_err("failed to allocate irq_domain\n");
 		return NULL;
 	}
-	irq_host->host_data = io_base;
 
-	return irq_host;
+	return irq_domain;
 }
 
 unsigned int hlwd_pic_get_irq(void)
@@ -200,7 +199,7 @@ unsigned int hlwd_pic_get_irq(void)
 
 void hlwd_pic_probe(void)
 {
-	struct irq_host *host;
+	struct irq_domain *host;
 	struct device_node *np;
 	const u32 *interrupts;
 	int cascade_virq;

@@ -342,7 +342,7 @@ unsigned int iSeries_get_irq(void)
 
 #ifdef CONFIG_PCI
 
-static int iseries_irq_host_map(struct irq_host *h, unsigned int virq,
+static int iseries_irq_host_map(struct irq_domain *h, unsigned int virq,
 				irq_hw_number_t hw)
 {
 	irq_set_chip_and_handler(virq, &iseries_pic, handle_fasteoi_irq);
@@ -350,13 +350,13 @@ static int iseries_irq_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int iseries_irq_host_match(struct irq_host *h, struct device_node *np)
+static int iseries_irq_host_match(struct irq_domain *h, struct device_node *np)
 {
 	/* Match all */
 	return 1;
 }
 
-static struct irq_host_ops iseries_irq_host_ops = {
+static const struct irq_domain_ops iseries_irq_domain_ops = {
 	.map = iseries_irq_host_map,
 	.match = iseries_irq_host_match,
 };
@@ -368,7 +368,7 @@ static struct irq_host_ops iseries_irq_host_ops = {
 void __init iSeries_init_IRQ(void)
 {
 	/* Register PCI event handler and open an event path */
-	struct irq_host *host;
+	struct irq_domain *host;
 	int ret;
 
 	/*
@@ -380,8 +380,7 @@ void __init iSeries_init_IRQ(void)
 	/* Create irq host. No need for a revmap since HV will give us
 	 * back our virtual irq number
 	 */
-	host = irq_alloc_host(NULL, IRQ_HOST_MAP_NOMAP, 0,
-			&iseries_irq_host_ops, 0);
+	host = irq_domain_add_nomap(NULL, &iseries_irq_domain_ops, NULL);
 	BUG_ON(host == NULL);
 	irq_set_default_host(host);
 

@@ -61,7 +61,7 @@ static DEFINE_RAW_SPINLOCK(pmac_pic_lock);
 static unsigned long ppc_lost_interrupts[NR_MASK_WORDS];
 static unsigned long ppc_cached_irq_mask[NR_MASK_WORDS];
 static int pmac_irq_cascade = -1;
-static struct irq_host *pmac_pic_host;
+static struct irq_domain *pmac_pic_host;
 
 static void __pmac_retrigger(unsigned int irq_nr)
 {
@@ -268,13 +268,13 @@ static struct irqaction gatwick_cascade_action = {
 	.name	= "cascade",
 };
 
-static int pmac_pic_host_match(struct irq_host *h, struct device_node *node)
+static int pmac_pic_host_match(struct irq_domain *h, struct device_node *node)
 {
 	/* We match all, we don't always have a node anyway */
 	return 1;
 }
 
-static int pmac_pic_host_map(struct irq_host *h, unsigned int virq,
+static int pmac_pic_host_map(struct irq_domain *h, unsigned int virq,
 			     irq_hw_number_t hw)
 {
 	if (hw >= max_irqs)
@@ -288,21 +288,10 @@ static int pmac_pic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int pmac_pic_host_xlate(struct irq_host *h, struct device_node *ct,
-			       const u32 *intspec, unsigned int intsize,
-			       irq_hw_number_t *out_hwirq,
-			       unsigned int *out_flags)
-
-{
-	*out_flags = IRQ_TYPE_NONE;
-	*out_hwirq = *intspec;
-	return 0;
-}
-
-static struct irq_host_ops pmac_pic_host_ops = {
+static const struct irq_domain_ops pmac_pic_host_ops = {
 	.match = pmac_pic_host_match,
 	.map = pmac_pic_host_map,
-	.xlate = pmac_pic_host_xlate,
+	.xlate = irq_domain_xlate_onecell,
 };
 
 static void __init pmac_pic_probe_oldstyle(void)
@@ -352,9 +341,8 @@ static void __init pmac_pic_probe_oldstyle(void)
 	/*
 	 * Allocate an irq host
 	 */
-	pmac_pic_host = irq_alloc_host(master, IRQ_HOST_MAP_LINEAR, max_irqs,
-				       &pmac_pic_host_ops,
-				       max_irqs);
+	pmac_pic_host = irq_domain_add_linear(master, max_irqs,
+					      &pmac_pic_host_ops, NULL);
 	BUG_ON(pmac_pic_host == NULL);
 	irq_set_default_host(pmac_pic_host);
 

|
||||
static int psurge_type = PSURGE_NONE;
|
||||
|
||||
/* irq for secondary cpus to report */
|
||||
static struct irq_host *psurge_host;
|
||||
static struct irq_domain *psurge_host;
|
||||
int psurge_secondary_virq;
|
||||
|
||||
/*
|
||||
@ -176,7 +176,7 @@ static void smp_psurge_cause_ipi(int cpu, unsigned long data)
|
||||
psurge_set_ipi(cpu);
|
||||
}
|
||||
|
||||
static int psurge_host_map(struct irq_host *h, unsigned int virq,
|
||||
static int psurge_host_map(struct irq_domain *h, unsigned int virq,
|
||||
irq_hw_number_t hw)
|
||||
{
|
||||
irq_set_chip_and_handler(virq, &dummy_irq_chip, handle_percpu_irq);
|
||||
@ -184,7 +184,7 @@ static int psurge_host_map(struct irq_host *h, unsigned int virq,
|
||||
return 0;
|
||||
}
|
||||
|
||||
struct irq_host_ops psurge_host_ops = {
|
||||
static const struct irq_domain_ops psurge_host_ops = {
|
||||
.map = psurge_host_map,
|
||||
};
|
||||
|
||||
@ -192,8 +192,7 @@ static int psurge_secondary_ipi_init(void)
|
||||
{
|
||||
int rc = -ENOMEM;
|
||||
|
||||
psurge_host = irq_alloc_host(NULL, IRQ_HOST_MAP_NOMAP, 0,
|
||||
&psurge_host_ops, 0);
|
||||
psurge_host = irq_domain_add_nomap(NULL, &psurge_host_ops, NULL);
|
||||
|
||||
if (psurge_host)
|
||||
psurge_secondary_virq = irq_create_direct_mapping(psurge_host);
|
||||
|
@@ -667,7 +667,7 @@ static void __maybe_unused _dump_mask(struct ps3_private *pd,
 static void dump_bmp(struct ps3_private* pd) {};
 #endif /* defined(DEBUG) */
 
-static int ps3_host_map(struct irq_host *h, unsigned int virq,
+static int ps3_host_map(struct irq_domain *h, unsigned int virq,
 	irq_hw_number_t hwirq)
 {
 	DBG("%s:%d: hwirq %lu, virq %u\n", __func__, __LINE__, hwirq,
@@ -678,13 +678,13 @@ static int ps3_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int ps3_host_match(struct irq_host *h, struct device_node *np)
+static int ps3_host_match(struct irq_domain *h, struct device_node *np)
 {
 	/* Match all */
 	return 1;
 }
 
-static struct irq_host_ops ps3_host_ops = {
+static const struct irq_domain_ops ps3_host_ops = {
 	.map = ps3_host_map,
 	.match = ps3_host_match,
 };
@@ -751,10 +751,9 @@ void __init ps3_init_IRQ(void)
 {
 	int result;
 	unsigned cpu;
-	struct irq_host *host;
+	struct irq_domain *host;
 
-	host = irq_alloc_host(NULL, IRQ_HOST_MAP_NOMAP, 0, &ps3_host_ops,
-		PS3_INVALID_OUTLET);
+	host = irq_domain_add_nomap(NULL, &ps3_host_ops, NULL);
 	irq_set_default_host(host);
 	irq_set_virq_count(PS3_PLUG_MAX + 1);
 

@@ -30,7 +30,7 @@
 static int opb_index = 0;
 
 struct opb_pic {
-	struct irq_host *host;
+	struct irq_domain *host;
 	void *regs;
 	int index;
 	spinlock_t lock;
@@ -179,7 +179,7 @@ static struct irq_chip opb_irq_chip = {
 	.irq_set_type	= opb_set_irq_type
 };
 
-static int opb_host_map(struct irq_host *host, unsigned int virq,
+static int opb_host_map(struct irq_domain *host, unsigned int virq,
 		irq_hw_number_t hwirq)
 {
 	struct opb_pic *opb;
@@ -196,20 +196,9 @@ static int opb_host_map(struct irq_host *host, unsigned int virq,
 	return 0;
 }
 
-static int opb_host_xlate(struct irq_host *host, struct device_node *dn,
-		const u32 *intspec, unsigned int intsize,
-		irq_hw_number_t *out_hwirq, unsigned int *out_type)
-{
-	/* Interrupt size must == 2 */
-	BUG_ON(intsize != 2);
-	*out_hwirq = intspec[0];
-	*out_type = intspec[1];
-	return 0;
-}
-
-static struct irq_host_ops opb_host_ops = {
+static const struct irq_domain_ops opb_host_ops = {
 	.map = opb_host_map,
-	.xlate = opb_host_xlate,
+	.xlate = irq_domain_xlate_twocell,
 };
 
 irqreturn_t opb_irq_handler(int irq, void *private)
@@ -263,13 +252,11 @@ struct opb_pic *opb_pic_init_one(struct device_node *dn)
 		goto free_opb;
 	}
 
-	/* Allocate an irq host so that Linux knows that despite only
+	/* Allocate an irq domain so that Linux knows that despite only
 	 * having one interrupt to issue, we're the controller for multiple
 	 * hardware IRQs, so later we can lookup their virtual IRQs. */
 
-	opb->host = irq_alloc_host(dn, IRQ_HOST_MAP_LINEAR,
-			OPB_NR_IRQS, &opb_host_ops, -1);
-
+	opb->host = irq_domain_add_linear(dn, OPB_NR_IRQS, &opb_host_ops, opb);
 	if (!opb->host) {
 		printk(KERN_ERR "opb: Failed to allocate IRQ host!\n");
 		goto free_regs;
@@ -277,7 +264,6 @@ struct opb_pic *opb_pic_init_one(struct device_node *dn)
 
 	opb->index = opb_index++;
 	spin_lock_init(&opb->lock);
-	opb->host->host_data = opb;
 
 	/* Disable all interrupts by default */
 	opb_out(opb, OPB_MLSASIER, 0);

@@ -54,7 +54,7 @@ cpm8xx_t __iomem *cpmp; /* Pointer to comm processor space */
 immap_t __iomem *mpc8xx_immr;
 static cpic8xx_t __iomem *cpic_reg;
 
-static struct irq_host *cpm_pic_host;
+static struct irq_domain *cpm_pic_host;
 
 static void cpm_mask_irq(struct irq_data *d)
 {
@@ -98,7 +98,7 @@ int cpm_get_irq(void)
 	return irq_linear_revmap(cpm_pic_host, cpm_vec);
 }
 
-static int cpm_pic_host_map(struct irq_host *h, unsigned int virq,
+static int cpm_pic_host_map(struct irq_domain *h, unsigned int virq,
 			    irq_hw_number_t hw)
 {
 	pr_debug("cpm_pic_host_map(%d, 0x%lx)\n", virq, hw);
@@ -123,7 +123,7 @@ static struct irqaction cpm_error_irqaction = {
 	.name = "error",
 };
 
-static struct irq_host_ops cpm_pic_host_ops = {
+static const struct irq_domain_ops cpm_pic_host_ops = {
 	.map = cpm_pic_host_map,
 };
 
@@ -164,8 +164,7 @@ unsigned int cpm_pic_init(void)
 
 	out_be32(&cpic_reg->cpic_cimr, 0);
 
-	cpm_pic_host = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR,
-				      64, &cpm_pic_host_ops, 64);
+	cpm_pic_host = irq_domain_add_linear(np, 64, &cpm_pic_host_ops, NULL);
 	if (cpm_pic_host == NULL) {
 		printk(KERN_ERR "CPM2 PIC: failed to allocate irq host!\n");
 		sirq = NO_IRQ;

@@ -50,7 +50,7 @@
 
 static intctl_cpm2_t __iomem *cpm2_intctl;
 
-static struct irq_host *cpm2_pic_host;
+static struct irq_domain *cpm2_pic_host;
 #define NR_MASK_WORDS   ((NR_IRQS + 31) / 32)
 static unsigned long ppc_cached_irq_mask[NR_MASK_WORDS];
 
@@ -214,7 +214,7 @@ unsigned int cpm2_get_irq(void)
 	return irq_linear_revmap(cpm2_pic_host, irq);
 }
 
-static int cpm2_pic_host_map(struct irq_host *h, unsigned int virq,
+static int cpm2_pic_host_map(struct irq_domain *h, unsigned int virq,
 			  irq_hw_number_t hw)
 {
 	pr_debug("cpm2_pic_host_map(%d, 0x%lx)\n", virq, hw);
@@ -224,21 +224,9 @@ static int cpm2_pic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int cpm2_pic_host_xlate(struct irq_host *h, struct device_node *ct,
-			    const u32 *intspec, unsigned int intsize,
-			    irq_hw_number_t *out_hwirq, unsigned int *out_flags)
-{
-	*out_hwirq = intspec[0];
-	if (intsize > 1)
-		*out_flags = intspec[1];
-	else
-		*out_flags = IRQ_TYPE_NONE;
-	return 0;
-}
-
-static struct irq_host_ops cpm2_pic_host_ops = {
+static const struct irq_domain_ops cpm2_pic_host_ops = {
 	.map = cpm2_pic_host_map,
-	.xlate = cpm2_pic_host_xlate,
+	.xlate = irq_domain_xlate_onetwocell,
 };
 
 void cpm2_pic_init(struct device_node *node)
@@ -275,8 +263,7 @@ void cpm2_pic_init(struct device_node *node)
 	out_be32(&cpm2_intctl->ic_scprrl, 0x05309770);
 
 	/* create a legacy host */
-	cpm2_pic_host = irq_alloc_host(node, IRQ_HOST_MAP_LINEAR,
-				       64, &cpm2_pic_host_ops, 64);
+	cpm2_pic_host = irq_domain_add_linear(node, 64, &cpm2_pic_host_ops, NULL);
 	if (cpm2_pic_host == NULL) {
 		printk(KERN_ERR "CPM2 PIC: failed to allocate irq host!\n");
 		return;

@@ -182,13 +182,13 @@ unsigned int ehv_pic_get_irq(void)
 	return irq_linear_revmap(global_ehv_pic->irqhost, irq);
 }
 
-static int ehv_pic_host_match(struct irq_host *h, struct device_node *node)
+static int ehv_pic_host_match(struct irq_domain *h, struct device_node *node)
 {
 	/* Exact match, unless ehv_pic node is NULL */
 	return h->of_node == NULL || h->of_node == node;
 }
 
-static int ehv_pic_host_map(struct irq_host *h, unsigned int virq,
+static int ehv_pic_host_map(struct irq_domain *h, unsigned int virq,
 			    irq_hw_number_t hw)
 {
 	struct ehv_pic *ehv_pic = h->host_data;
@@ -217,7 +217,7 @@ static int ehv_pic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int ehv_pic_host_xlate(struct irq_host *h, struct device_node *ct,
+static int ehv_pic_host_xlate(struct irq_domain *h, struct device_node *ct,
 			   const u32 *intspec, unsigned int intsize,
 			   irq_hw_number_t *out_hwirq, unsigned int *out_flags)
 
@@ -248,7 +248,7 @@ static int ehv_pic_host_xlate(struct irq_host *h, struct device_node *ct,
 	return 0;
 }
 
-static struct irq_host_ops ehv_pic_host_ops = {
+static const struct irq_domain_ops ehv_pic_host_ops = {
 	.match = ehv_pic_host_match,
 	.map = ehv_pic_host_map,
 	.xlate = ehv_pic_host_xlate,
@@ -275,9 +275,8 @@ void __init ehv_pic_init(void)
 		return;
 	}
 
-	ehv_pic->irqhost = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR,
-		NR_EHV_PIC_INTS, &ehv_pic_host_ops, 0);
-
+	ehv_pic->irqhost = irq_domain_add_linear(np, NR_EHV_PIC_INTS,
+						 &ehv_pic_host_ops, ehv_pic);
 	if (!ehv_pic->irqhost) {
 		of_node_put(np);
 		kfree(ehv_pic);
@@ -293,7 +292,6 @@ void __init ehv_pic_init(void)
 		of_node_put(np2);
 	}
 
-	ehv_pic->irqhost->host_data = ehv_pic;
 	ehv_pic->hc_irq = ehv_pic_irq_chip;
 	ehv_pic->hc_irq.irq_set_affinity = ehv_pic_set_affinity;
 	ehv_pic->coreint_flag = coreint_flag;

@@ -60,7 +60,7 @@ static struct irq_chip fsl_msi_chip = {
 	.name = "FSL-MSI",
 };
 
-static int fsl_msi_host_map(struct irq_host *h, unsigned int virq,
+static int fsl_msi_host_map(struct irq_domain *h, unsigned int virq,
 			    irq_hw_number_t hw)
 {
 	struct fsl_msi *msi_data = h->host_data;
@@ -74,7 +74,7 @@ static int fsl_msi_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static struct irq_host_ops fsl_msi_host_ops = {
+static const struct irq_domain_ops fsl_msi_host_ops = {
 	.map = fsl_msi_host_map,
 };
 
@@ -387,8 +387,8 @@ static int __devinit fsl_of_msi_probe(struct platform_device *dev)
 	}
 	platform_set_drvdata(dev, msi);
 
-	msi->irqhost = irq_alloc_host(dev->dev.of_node, IRQ_HOST_MAP_LINEAR,
-				      NR_MSI_IRQS, &fsl_msi_host_ops, 0);
+	msi->irqhost = irq_domain_add_linear(dev->dev.of_node,
+				      NR_MSI_IRQS, &fsl_msi_host_ops, msi);
 
 	if (msi->irqhost == NULL) {
 		dev_err(&dev->dev, "No memory for MSI irqhost\n");
@@ -420,8 +420,6 @@ static int __devinit fsl_of_msi_probe(struct platform_device *dev)
 
 	msi->feature = features->fsl_pic_ip;
 
-	msi->irqhost->host_data = msi;
-
 	/*
 	 * Remember the phandle, so that we can match with any PCI nodes
 	 * that have an "fsl,msi" property.

@@ -26,7 +26,7 @@
 #define FSL_PIC_IP_VMPIC  0x00000003
 
 struct fsl_msi {
-	struct irq_host *irqhost;
+	struct irq_domain *irqhost;
 
 	unsigned long cascade_irq;
 

@@ -25,7 +25,7 @@ static unsigned char cached_8259[2] = { 0xff, 0xff };
 
 static DEFINE_RAW_SPINLOCK(i8259_lock);
 
-static struct irq_host *i8259_host;
+static struct irq_domain *i8259_host;
 
 /*
  * Acknowledge the IRQ using either the PCI host bridge's interrupt
@@ -163,12 +163,12 @@ static struct resource pic_edgectrl_iores = {
 	.flags = IORESOURCE_BUSY,
 };
 
-static int i8259_host_match(struct irq_host *h, struct device_node *node)
+static int i8259_host_match(struct irq_domain *h, struct device_node *node)
 {
 	return h->of_node == NULL || h->of_node == node;
 }
 
-static int i8259_host_map(struct irq_host *h, unsigned int virq,
+static int i8259_host_map(struct irq_domain *h, unsigned int virq,
 			  irq_hw_number_t hw)
 {
 	pr_debug("i8259_host_map(%d, 0x%lx)\n", virq, hw);
@@ -185,7 +185,7 @@ static int i8259_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int i8259_host_xlate(struct irq_host *h, struct device_node *ct,
+static int i8259_host_xlate(struct irq_domain *h, struct device_node *ct,
 			    const u32 *intspec, unsigned int intsize,
 			    irq_hw_number_t *out_hwirq, unsigned int *out_flags)
 {
@@ -205,13 +205,13 @@ static int i8259_host_xlate(struct irq_host *h, struct device_node *ct,
 	return 0;
 }
 
-static struct irq_host_ops i8259_host_ops = {
+static struct irq_domain_ops i8259_host_ops = {
 	.match = i8259_host_match,
 	.map = i8259_host_map,
 	.xlate = i8259_host_xlate,
 };
 
-struct irq_host *i8259_get_host(void)
+struct irq_domain *i8259_get_host(void)
 {
 	return i8259_host;
 }
@@ -263,8 +263,7 @@ void i8259_init(struct device_node *node, unsigned long intack_addr)
 	raw_spin_unlock_irqrestore(&i8259_lock, flags);
 
 	/* create a legacy host */
-	i8259_host = irq_alloc_host(node, IRQ_HOST_MAP_LEGACY,
-				    0, &i8259_host_ops, 0);
+	i8259_host = irq_domain_add_legacy_isa(node, &i8259_host_ops, NULL);
 	if (i8259_host == NULL) {
 		printk(KERN_ERR "i8259: failed to allocate irq host !\n");
 		return;

@@ -672,13 +672,13 @@ static struct irq_chip ipic_edge_irq_chip = {
 	.irq_set_type = ipic_set_irq_type,
 };
 
-static int ipic_host_match(struct irq_host *h, struct device_node *node)
+static int ipic_host_match(struct irq_domain *h, struct device_node *node)
 {
 	/* Exact match, unless ipic node is NULL */
 	return h->of_node == NULL || h->of_node == node;
 }
 
-static int ipic_host_map(struct irq_host *h, unsigned int virq,
+static int ipic_host_map(struct irq_domain *h, unsigned int virq,
 			 irq_hw_number_t hw)
 {
 	struct ipic *ipic = h->host_data;
@@ -692,26 +692,10 @@ static int ipic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int ipic_host_xlate(struct irq_host *h, struct device_node *ct,
-			   const u32 *intspec, unsigned int intsize,
-			   irq_hw_number_t *out_hwirq, unsigned int *out_flags)
-
-{
-	/* interrupt sense values coming from the device tree equal either
-	 * LEVEL_LOW (low assertion) or EDGE_FALLING (high-to-low change)
-	 */
-	*out_hwirq = intspec[0];
-	if (intsize > 1)
-		*out_flags = intspec[1];
-	else
-		*out_flags = IRQ_TYPE_NONE;
-	return 0;
-}
-
-static struct irq_host_ops ipic_host_ops = {
+static struct irq_domain_ops ipic_host_ops = {
 	.match = ipic_host_match,
 	.map = ipic_host_map,
-	.xlate = ipic_host_xlate,
+	.xlate = irq_domain_xlate_onetwocell,
 };
 
 struct ipic * __init ipic_init(struct device_node *node, unsigned int flags)
@@ -728,9 +712,8 @@ struct ipic * __init ipic_init(struct device_node *node, unsigned int flags)
 	if (ipic == NULL)
 		return NULL;
 
-	ipic->irqhost = irq_alloc_host(node, IRQ_HOST_MAP_LINEAR,
-				       NR_IPIC_INTS,
-				       &ipic_host_ops, 0);
+	ipic->irqhost = irq_domain_add_linear(node, NR_IPIC_INTS,
+					      &ipic_host_ops, ipic);
 	if (ipic->irqhost == NULL) {
 		kfree(ipic);
 		return NULL;
@@ -738,8 +721,6 @@ struct ipic * __init ipic_init(struct device_node *node, unsigned int flags)
 
 	ipic->regs = ioremap(res.start, resource_size(&res));
 
-	ipic->irqhost->host_data = ipic;
-
 	/* init hw */
 	ipic_write(ipic->regs, IPIC_SICNR, 0x0);
 

@@ -43,7 +43,7 @@ struct ipic {
 	volatile u32 __iomem	*regs;
 
 	/* The remapper for this IPIC */
-	struct irq_host		*irqhost;
+	struct irq_domain	*irqhost;
 };
 
 struct ipic_info {

@@ -17,7 +17,7 @@
 
 extern int cpm_get_irq(struct pt_regs *regs);
 
-static struct irq_host *mpc8xx_pic_host;
+static struct irq_domain *mpc8xx_pic_host;
 #define NR_MASK_WORDS   ((NR_IRQS + 31) / 32)
 static unsigned long ppc_cached_irq_mask[NR_MASK_WORDS];
 static sysconf8xx_t __iomem *siu_reg;
@@ -110,7 +110,7 @@ unsigned int mpc8xx_get_irq(void)
 
 }
 
-static int mpc8xx_pic_host_map(struct irq_host *h, unsigned int virq,
+static int mpc8xx_pic_host_map(struct irq_domain *h, unsigned int virq,
 			  irq_hw_number_t hw)
 {
 	pr_debug("mpc8xx_pic_host_map(%d, 0x%lx)\n", virq, hw);
@@ -121,7 +121,7 @@ static int mpc8xx_pic_host_map(struct irq_host *h, unsigned int virq,
 }
 
 
-static int mpc8xx_pic_host_xlate(struct irq_host *h, struct device_node *ct,
+static int mpc8xx_pic_host_xlate(struct irq_domain *h, struct device_node *ct,
 			    const u32 *intspec, unsigned int intsize,
 			    irq_hw_number_t *out_hwirq, unsigned int *out_flags)
 {
@@ -142,7 +142,7 @@ static int mpc8xx_pic_host_xlate(struct irq_host *h, struct device_node *ct,
 }
 
 
-static struct irq_host_ops mpc8xx_pic_host_ops = {
+static struct irq_domain_ops mpc8xx_pic_host_ops = {
 	.map = mpc8xx_pic_host_map,
 	.xlate = mpc8xx_pic_host_xlate,
 };
@@ -171,8 +171,7 @@ int mpc8xx_pic_init(void)
 		goto out;
 	}
 
-	mpc8xx_pic_host = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR,
-					 64, &mpc8xx_pic_host_ops, 64);
+	mpc8xx_pic_host = irq_domain_add_linear(np, 64, &mpc8xx_pic_host_ops, NULL);
 	if (mpc8xx_pic_host == NULL) {
 		printk(KERN_ERR "MPC8xx PIC: failed to allocate irq host!\n");
 		ret = -ENOMEM;

@@ -965,13 +965,13 @@ static struct irq_chip mpic_irq_ht_chip = {
 #endif /* CONFIG_MPIC_U3_HT_IRQS */
 
 
-static int mpic_host_match(struct irq_host *h, struct device_node *node)
+static int mpic_host_match(struct irq_domain *h, struct device_node *node)
 {
 	/* Exact match, unless mpic node is NULL */
 	return h->of_node == NULL || h->of_node == node;
 }
 
-static int mpic_host_map(struct irq_host *h, unsigned int virq,
+static int mpic_host_map(struct irq_domain *h, unsigned int virq,
 			 irq_hw_number_t hw)
 {
 	struct mpic *mpic = h->host_data;
@@ -1041,7 +1041,7 @@ static int mpic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int mpic_host_xlate(struct irq_host *h, struct device_node *ct,
+static int mpic_host_xlate(struct irq_domain *h, struct device_node *ct,
 			   const u32 *intspec, unsigned int intsize,
 			   irq_hw_number_t *out_hwirq, unsigned int *out_flags)
 
@@ -1121,13 +1121,13 @@ static void mpic_cascade(unsigned int irq, struct irq_desc *desc)
 	BUG_ON(!(mpic->flags & MPIC_SECONDARY));
 
 	virq = mpic_get_one_irq(mpic);
-	if (virq != NO_IRQ)
+	if (virq)
 		generic_handle_irq(virq);
 
 	chip->irq_eoi(&desc->irq_data);
 }
 
-static struct irq_host_ops mpic_host_ops = {
+static struct irq_domain_ops mpic_host_ops = {
 	.match = mpic_host_match,
 	.map = mpic_host_map,
 	.xlate = mpic_host_xlate,
@@ -1345,10 +1345,9 @@ struct mpic * __init mpic_alloc(struct device_node *node,
 	mpic->isu_shift = 1 + __ilog2(mpic->isu_size - 1);
 	mpic->isu_mask = (1 << mpic->isu_shift) - 1;
 
-	mpic->irqhost = irq_alloc_host(mpic->node, IRQ_HOST_MAP_LINEAR,
+	mpic->irqhost = irq_domain_add_linear(mpic->node,
 				       isu_size ? isu_size : mpic->num_sources,
-				       &mpic_host_ops,
-				       flags & MPIC_LARGE_VECTORS ? 2048 : 256);
+				       &mpic_host_ops, mpic);
 
 	/*
 	 * FIXME: The code leaks the MPIC object and mappings here; this
@@ -1357,8 +1356,6 @@ struct mpic * __init mpic_alloc(struct device_node *node,
 	if (mpic->irqhost == NULL)
 		return NULL;
 
-	mpic->irqhost->host_data = mpic;
-
 	/* Display version */
 	switch (greg_feature & MPIC_GREG_FEATURE_VERSION_MASK) {
 	case 1:

@@ -32,7 +32,7 @@ void mpic_msi_reserve_hwirq(struct mpic *mpic, irq_hw_number_t hwirq)
 static int mpic_msi_reserve_u3_hwirqs(struct mpic *mpic)
 {
 	irq_hw_number_t hwirq;
-	struct irq_host_ops *ops = mpic->irqhost->ops;
+	const struct irq_domain_ops *ops = mpic->irqhost->ops;
 	struct device_node *np;
 	int flags, index, i;
 	struct of_irq oirq;

@@ -70,7 +70,7 @@ static u32 mv64x60_cached_low_mask;
 static u32 mv64x60_cached_high_mask = MV64X60_HIGH_GPP_GROUPS;
 static u32 mv64x60_cached_gpp_mask;
 
-static struct irq_host *mv64x60_irq_host;
+static struct irq_domain *mv64x60_irq_host;
 
 /*
  * mv64x60_chip_low functions
@@ -208,7 +208,7 @@ static struct irq_chip *mv64x60_chips[] = {
 	[MV64x60_LEVEL1_GPP]  = &mv64x60_chip_gpp,
 };
 
-static int mv64x60_host_map(struct irq_host *h, unsigned int virq,
+static int mv64x60_host_map(struct irq_domain *h, unsigned int virq,
 			  irq_hw_number_t hwirq)
 {
 	int level1;
@@ -223,7 +223,7 @@ static int mv64x60_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static struct irq_host_ops mv64x60_host_ops = {
+static struct irq_domain_ops mv64x60_host_ops = {
 	.map   = mv64x60_host_map,
 };
 
@@ -250,9 +250,8 @@ void __init mv64x60_init_irq(void)
 	paddr = of_translate_address(np, reg);
 	mv64x60_irq_reg_base = ioremap(paddr, reg[1]);
 
-	mv64x60_irq_host = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR,
-					  MV64x60_NUM_IRQS,
-					  &mv64x60_host_ops, MV64x60_NUM_IRQS);
+	mv64x60_irq_host = irq_domain_add_linear(np, MV64x60_NUM_IRQS,
+					  &mv64x60_host_ops, NULL);
 
 	spin_lock_irqsave(&mv64x60_lock, flags);
 	out_le32(mv64x60_gpp_reg_base + MV64x60_GPP_INTR_MASK,

@@ -245,13 +245,13 @@ static struct irq_chip qe_ic_irq_chip = {
 	.irq_mask_ack = qe_ic_mask_irq,
 };
 
-static int qe_ic_host_match(struct irq_host *h, struct device_node *node)
+static int qe_ic_host_match(struct irq_domain *h, struct device_node *node)
 {
 	/* Exact match, unless qe_ic node is NULL */
 	return h->of_node == NULL || h->of_node == node;
 }
 
-static int qe_ic_host_map(struct irq_host *h, unsigned int virq,
+static int qe_ic_host_map(struct irq_domain *h, unsigned int virq,
 			  irq_hw_number_t hw)
 {
 	struct qe_ic *qe_ic = h->host_data;
@@ -272,23 +272,10 @@ static int qe_ic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }
 
-static int qe_ic_host_xlate(struct irq_host *h, struct device_node *ct,
-			    const u32 * intspec, unsigned int intsize,
-			    irq_hw_number_t * out_hwirq,
-			    unsigned int *out_flags)
-{
-	*out_hwirq = intspec[0];
-	if (intsize > 1)
-		*out_flags = intspec[1];
-	else
-		*out_flags = IRQ_TYPE_NONE;
-	return 0;
-}
-
-static struct irq_host_ops qe_ic_host_ops = {
+static struct irq_domain_ops qe_ic_host_ops = {
 	.match = qe_ic_host_match,
 	.map = qe_ic_host_map,
-	.xlate = qe_ic_host_xlate,
+	.xlate = irq_domain_xlate_onetwocell,
 };
 
 /* Return an interrupt vector or NO_IRQ if no interrupt is pending. */
@@ -339,8 +326,8 @@ void __init qe_ic_init(struct device_node *node, unsigned int flags,
 	if (qe_ic == NULL)
 		return;
 
-	qe_ic->irqhost = irq_alloc_host(node, IRQ_HOST_MAP_LINEAR,
-					NR_QE_IC_INTS, &qe_ic_host_ops, 0);
+	qe_ic->irqhost = irq_domain_add_linear(node, NR_QE_IC_INTS,
+					       &qe_ic_host_ops, qe_ic);
 	if (qe_ic->irqhost == NULL) {
 		kfree(qe_ic);
 		return;
@@ -348,7 +335,6 @@ void __init qe_ic_init(struct device_node *node, unsigned int flags,
 
 	qe_ic->regs = ioremap(res.start, resource_size(&res));
 
-	qe_ic->irqhost->host_data = qe_ic;
 	qe_ic->hc_irq = qe_ic_irq_chip;
 
 	qe_ic->virq_high = irq_of_parse_and_map(node, 0);

@@ -79,7 +79,7 @@ struct qe_ic {
 	volatile u32 __iomem *regs;
 
 	/* The remapper for this QEIC */
-	struct irq_host *irqhost;
+	struct irq_domain *irqhost;
 
 	/* The "linux" controller struct */
 	struct irq_chip hc_irq;

@@ -51,7 +51,7 @@
 u32 tsi108_pci_cfg_base;
 static u32 tsi108_pci_cfg_phys;
 u32 tsi108_csr_vir_base;
-static struct irq_host *pci_irq_host;
+static struct irq_domain *pci_irq_host;

 extern u32 get_vir_csrbase(void);
 extern u32 tsi108_read_reg(u32 reg_offset);
@@ -376,7 +376,7 @@ static struct irq_chip tsi108_pci_irq = {
 	.irq_unmask = tsi108_pci_irq_unmask,
 };

-static int pci_irq_host_xlate(struct irq_host *h, struct device_node *ct,
+static int pci_irq_host_xlate(struct irq_domain *h, struct device_node *ct,
 			      const u32 *intspec, unsigned int intsize,
 			      irq_hw_number_t *out_hwirq, unsigned int *out_flags)
 {
@@ -385,7 +385,7 @@ static int pci_irq_host_xlate(struct irq_host *h, struct device_node *ct,
 	return 0;
 }

-static int pci_irq_host_map(struct irq_host *h, unsigned int virq,
+static int pci_irq_host_map(struct irq_domain *h, unsigned int virq,
 			    irq_hw_number_t hw)
 {	unsigned int irq;
 	DBG("%s(%d, 0x%lx)\n", __func__, virq, hw);
@@ -397,7 +397,7 @@ static int pci_irq_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }

-static struct irq_host_ops pci_irq_host_ops = {
+static struct irq_domain_ops pci_irq_domain_ops = {
 	.map = pci_irq_host_map,
 	.xlate = pci_irq_host_xlate,
 };
@@ -419,10 +419,9 @@ void __init tsi108_pci_int_init(struct device_node *node)
 {
 	DBG("Tsi108_pci_int_init: initializing PCI interrupts\n");

-	pci_irq_host = irq_alloc_host(node, IRQ_HOST_MAP_LEGACY,
-				      0, &pci_irq_host_ops, 0);
+	pci_irq_host = irq_domain_add_legacy_isa(node, &pci_irq_domain_ops, NULL);
 	if (pci_irq_host == NULL) {
-		printk(KERN_ERR "pci_irq_host: failed to allocate irq host !\n");
+		printk(KERN_ERR "pci_irq_host: failed to allocate irq domain!\n");
 		return;
 	}
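tsi108 previously used the LEGACY map with a zero base, i.e. the 16 ISA
interrupts. The new irq_domain_add_legacy_isa() wrapper makes that explicit:
it pre-assigns hwirqs 0..NUM_ISA_INTERRUPTS-1 to the same-numbered Linux
IRQs. In this kernel the wrapper reduces to roughly the following (a sketch
of its effect, not the verbatim header):

	static inline struct irq_domain *irq_domain_add_legacy_isa(
				struct device_node *of_node,
				const struct irq_domain_ops *ops,
				void *host_data)
	{
		/* size = NUM_ISA_INTERRUPTS, first_irq = 0, first_hwirq = 0 */
		return irq_domain_add_legacy(of_node, NUM_ISA_INTERRUPTS, 0, 0,
					     ops, host_data);
	}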
@@ -49,7 +49,7 @@ struct uic {
 	raw_spinlock_t lock;

 	/* The remapper for this UIC */
-	struct irq_host *irqhost;
+	struct irq_domain *irqhost;
 };

 static void uic_unmask_irq(struct irq_data *d)
@@ -174,7 +174,7 @@ static struct irq_chip uic_irq_chip = {
 	.irq_set_type = uic_set_irq_type,
 };

-static int uic_host_map(struct irq_host *h, unsigned int virq,
+static int uic_host_map(struct irq_domain *h, unsigned int virq,
 			irq_hw_number_t hw)
 {
 	struct uic *uic = h->host_data;
@@ -190,21 +190,9 @@ static int uic_host_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }

-static int uic_host_xlate(struct irq_host *h, struct device_node *ct,
-			  const u32 *intspec, unsigned int intsize,
-			  irq_hw_number_t *out_hwirq, unsigned int *out_type)
-
-{
-	/* UIC intspecs must have 2 cells */
-	BUG_ON(intsize != 2);
-	*out_hwirq = intspec[0];
-	*out_type = intspec[1];
-	return 0;
-}
-
-static struct irq_host_ops uic_host_ops = {
+static struct irq_domain_ops uic_host_ops = {
 	.map = uic_host_map,
-	.xlate = uic_host_xlate,
+	.xlate = irq_domain_xlate_twocell,
 };

 void uic_irq_cascade(unsigned int virq, struct irq_desc *desc)
@@ -270,13 +258,11 @@ static struct uic * __init uic_init_one(struct device_node *node)
 	}
 	uic->dcrbase = *dcrreg;

-	uic->irqhost = irq_alloc_host(node, IRQ_HOST_MAP_LINEAR,
-				      NR_UIC_INTS, &uic_host_ops, -1);
+	uic->irqhost = irq_domain_add_linear(node, NR_UIC_INTS, &uic_host_ops,
+					     uic);
 	if (! uic->irqhost)
 		return NULL; /* FIXME: panic? */

-	uic->irqhost->host_data = uic;
-
 	/* Start with all interrupts disabled, level and non-critical */
 	mtdcr(uic->dcrbase + UIC_ER, 0);
 	mtdcr(uic->dcrbase + UIC_CR, 0);
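The deleted uic_host_xlate() decoded a mandatory two-cell specifier with a
BUG_ON. The replacement library helper does the same decode but fails
gracefully and masks the sense bits; it behaves roughly like this (a sketch
of the helper's semantics, not the verbatim kernel source):

	static int twocell_xlate(struct irq_domain *d, struct device_node *ctrlr,
				 const u32 *intspec, unsigned int intsize,
				 irq_hw_number_t *out_hwirq, unsigned int *out_type)
	{
		if (WARN_ON(intsize < 2))
			return -EINVAL;
		*out_hwirq = intspec[0];
		*out_type = intspec[1] & IRQ_TYPE_SENSE_MASK;
		return 0;
	}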
@@ -40,7 +40,7 @@ unsigned int xics_interrupt_server_size = 8;

 DEFINE_PER_CPU(struct xics_cppr, xics_cppr);

-struct irq_host *xics_host;
+struct irq_domain *xics_host;

 static LIST_HEAD(ics_list);

@@ -212,16 +212,16 @@ void xics_migrate_irqs_away(void)
 		/* We can't set affinity on ISA interrupts */
 		if (virq < NUM_ISA_INTERRUPTS)
 			continue;
-		if (!virq_is_host(virq, xics_host))
-			continue;
-		irq = (unsigned int)virq_to_hw(virq);
-		/* We need to get IPIs still. */
-		if (irq == XICS_IPI || irq == XICS_IRQ_SPURIOUS)
-			continue;
 		desc = irq_to_desc(virq);
 		/* We only need to migrate enabled IRQS */
 		if (!desc || !desc->action)
 			continue;
+		if (desc->irq_data.domain != xics_host)
+			continue;
+		irq = desc->irq_data.hwirq;
+		/* We need to get IPIs still. */
+		if (irq == XICS_IPI || irq == XICS_IRQ_SPURIOUS)
+			continue;
 		chip = irq_desc_get_chip(desc);
 		if (!chip || !chip->irq_set_affinity)
 			continue;
@@ -301,7 +301,7 @@ int xics_get_irq_server(unsigned int virq, const struct cpumask *cpumask,
 }
 #endif /* CONFIG_SMP */

-static int xics_host_match(struct irq_host *h, struct device_node *node)
+static int xics_host_match(struct irq_domain *h, struct device_node *node)
 {
 	struct ics *ics;

@@ -323,7 +323,7 @@ static struct irq_chip xics_ipi_chip = {
 	.irq_unmask = xics_ipi_unmask,
 };

-static int xics_host_map(struct irq_host *h, unsigned int virq,
+static int xics_host_map(struct irq_domain *h, unsigned int virq,
 			 irq_hw_number_t hw)
 {
 	struct ics *ics;
@@ -351,7 +351,7 @@ static int xics_host_map(struct irq_host *h, unsigned int virq,
 	return -EINVAL;
 }

-static int xics_host_xlate(struct irq_host *h, struct device_node *ct,
+static int xics_host_xlate(struct irq_domain *h, struct device_node *ct,
 			   const u32 *intspec, unsigned int intsize,
 			   irq_hw_number_t *out_hwirq, unsigned int *out_flags)

@@ -366,7 +366,7 @@ static int xics_host_xlate(struct irq_host *h, struct device_node *ct,
 	return 0;
 }

-static struct irq_host_ops xics_host_ops = {
+static struct irq_domain_ops xics_host_ops = {
 	.match = xics_host_match,
 	.map = xics_host_map,
 	.xlate = xics_host_xlate,
@@ -374,8 +374,7 @@ static struct irq_host_ops xics_host_ops = {

 static void __init xics_init_host(void)
 {
-	xics_host = irq_alloc_host(NULL, IRQ_HOST_MAP_TREE, 0, &xics_host_ops,
-				   XICS_IRQ_SPURIOUS);
+	xics_host = irq_domain_add_tree(NULL, &xics_host_ops, NULL);
 	BUG_ON(xics_host == NULL);
 	irq_set_default_host(xics_host);
 }
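XICS has a large, sparse hwirq space, so it keeps a radix-tree reverse map;
irq_domain_add_tree() takes only the node, the ops table, and host_data, and
mappings are inserted as they are created. Note also the migrate loop above:
it no longer consults the domain at all, because irq_data now carries both
the owning domain and the hwirq. A sketch of the two lookup directions under
the new API (illustrative, using the xics_host pointer from this hunk):

	/* hwirq -> Linux IRQ: radix tree lookup on the hot path */
	unsigned int virq = irq_find_mapping(xics_host, hwirq);

	/* Linux IRQ -> hwirq: no domain call needed at all */
	irq_hw_number_t hw = irq_get_irq_data(virq)->hwirq;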
@@ -40,7 +40,7 @@
 #define XINTC_IVR	24	/* Interrupt Vector */
 #define XINTC_MER	28	/* Master Enable */

-static struct irq_host *master_irqhost;
+static struct irq_domain *master_irqhost;

 #define XILINX_INTC_MAXIRQS	(32)

@@ -141,7 +141,7 @@ static struct irq_chip xilinx_intc_edge_irqchip = {
 /**
  * xilinx_intc_xlate - translate virq# from device tree interrupts property
  */
-static int xilinx_intc_xlate(struct irq_host *h, struct device_node *ct,
+static int xilinx_intc_xlate(struct irq_domain *h, struct device_node *ct,
 			     const u32 *intspec, unsigned int intsize,
 			     irq_hw_number_t *out_hwirq,
 			     unsigned int *out_flags)
@@ -161,7 +161,7 @@ static int xilinx_intc_xlate(struct irq_host *h, struct device_node *ct,

 	return 0;
 }
-static int xilinx_intc_map(struct irq_host *h, unsigned int virq,
+static int xilinx_intc_map(struct irq_domain *h, unsigned int virq,
 			   irq_hw_number_t irq)
 {
 	irq_set_chip_data(virq, h->host_data);
@@ -177,15 +177,15 @@ static int xilinx_intc_map(struct irq_host *h, unsigned int virq,
 	return 0;
 }

-static struct irq_host_ops xilinx_intc_ops = {
+static struct irq_domain_ops xilinx_intc_ops = {
 	.map = xilinx_intc_map,
 	.xlate = xilinx_intc_xlate,
 };

-struct irq_host * __init
+struct irq_domain * __init
 xilinx_intc_init(struct device_node *np)
 {
-	struct irq_host * irq;
+	struct irq_domain * irq;
 	void * regs;

 	/* Find and map the intc registers */
@@ -200,12 +200,11 @@ xilinx_intc_init(struct device_node *np)
 	out_be32(regs + XINTC_IAR, ~(u32) 0); /* Acknowledge pending irqs */
 	out_be32(regs + XINTC_MER, 0x3UL); /* Turn on the Master Enable. */

-	/* Allocate and initialize an irq_host structure. */
-	irq = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR, XILINX_INTC_MAXIRQS,
-			     &xilinx_intc_ops, -1);
+	/* Allocate and initialize an irq_domain structure. */
+	irq = irq_domain_add_linear(np, XILINX_INTC_MAXIRQS, &xilinx_intc_ops,
+				    regs);
 	if (!irq)
 		panic(__FILE__ ": Cannot allocate IRQ host\n");
-	irq->host_data = regs;

 	return irq;
 }
@@ -22,6 +22,7 @@
 #include <linux/proc_fs.h>
 #include <linux/mutex.h>
 #include <linux/atomic.h>
+#include <linux/irqdomain.h>

 #define OF_ROOT_NODE_ADDR_CELLS_DEFAULT	2
 #define OF_ROOT_NODE_SIZE_CELLS_DEFAULT	1
@@ -55,15 +56,6 @@ struct resource;
 extern void __iomem *of_ioremap(struct resource *res, unsigned long offset, unsigned long size, char *name);
 extern void of_iounmap(struct resource *res, void __iomem *base, unsigned long size);

-/* These routines are here to provide compatibility with how powerpc
- * handles IRQ mapping for OF device nodes. We precompute and permanently
- * register them in the platform_device objects, whereas powerpc computes them
- * on request.
- */
-static inline void irq_dispose_mapping(unsigned int virq)
-{
-}
-
 extern struct device_node *of_console_device;
 extern char *of_console_path;
 extern char *of_console_options;
@@ -398,6 +398,7 @@ config X86_INTEL_CE
 	select X86_REBOOTFIXUPS
 	select OF
 	select OF_EARLY_FLATTREE
+	select IRQ_DOMAIN
 	---help---
 	  Select for the Intel CE media processor (CE4100) SOC.
 	  This option compiles in support for the CE4100 SOC for settop
@@ -2076,6 +2077,7 @@ config OLPC
 	select GPIOLIB
 	select OF
 	select OF_PROMTREE
+	select IRQ_DOMAIN
 	---help---
 	  Add support for detecting the unique features of the OLPC
 	  XO hardware.
@@ -29,7 +29,7 @@ extern unsigned int sig_xstate_size;
 extern void fpu_init(void);
 extern void mxcsr_feature_mask_init(void);
 extern int init_fpu(struct task_struct *child);
-extern asmlinkage void math_state_restore(void);
+extern void math_state_restore(void);
 extern void __math_state_restore(void);
 extern int dump_fpu(struct pt_regs *, struct user_i387_struct *);

@@ -307,9 +307,54 @@ static inline void __clear_fpu(struct task_struct *tsk)
 	}
 }

+/*
+ * Were we in an interrupt that interrupted kernel mode?
+ *
+ * We can do a kernel_fpu_begin/end() pair *ONLY* if that
+ * pair does nothing at all: TS_USEDFPU must be clear (so
+ * that we don't try to save the FPU state), and TS must
+ * be set (so that the clts/stts pair does nothing that is
+ * visible in the interrupted kernel thread).
+ */
+static inline bool interrupted_kernel_fpu_idle(void)
+{
+	return !(current_thread_info()->status & TS_USEDFPU) &&
+		(read_cr0() & X86_CR0_TS);
+}
+
+/*
+ * Were we in user mode (or vm86 mode) when we were
+ * interrupted?
+ *
+ * Doing kernel_fpu_begin/end() is ok if we are running
+ * in an interrupt context from user mode - we'll just
+ * save the FPU state as required.
+ */
+static inline bool interrupted_user_mode(void)
+{
+	struct pt_regs *regs = get_irq_regs();
+	return regs && user_mode_vm(regs);
+}
+
+/*
+ * Can we use the FPU in kernel mode with the
+ * whole "kernel_fpu_begin/end()" sequence?
+ *
+ * It's always ok in process context (ie "not interrupt")
+ * but it is sometimes ok even from an irq.
+ */
+static inline bool irq_fpu_usable(void)
+{
+	return !in_interrupt() ||
+		interrupted_user_mode() ||
+		interrupted_kernel_fpu_idle();
+}
+
 static inline void kernel_fpu_begin(void)
 {
 	struct thread_info *me = current_thread_info();

+	WARN_ON_ONCE(!irq_fpu_usable());
 	preempt_disable();
 	if (me->status & TS_USEDFPU)
 		__save_init_fpu(me->task);
@@ -323,14 +368,6 @@ static inline void kernel_fpu_end(void)
 	preempt_enable();
 }

-static inline bool irq_fpu_usable(void)
-{
-	struct pt_regs *regs;
-
-	return !in_interrupt() || !(regs = get_irq_regs()) || \
-		user_mode(regs) || (read_cr0() & X86_CR0_TS);
-}
-
 /*
  * Some instructions like VIA's padlock instructions generate a spurious
  * DNA fault but don't modify SSE registers. And these instructions
@@ -367,6 +404,7 @@ static inline void irq_ts_restore(int TS_state)
  */
 static inline void save_init_fpu(struct task_struct *tsk)
 {
+	WARN_ON_ONCE(task_thread_info(tsk)->status & TS_USEDFPU);
 	preempt_disable();
 	__save_init_fpu(tsk);
 	stts();
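The header now defines irq_fpu_usable() before kernel_fpu_begin() so the
latter can assert it, and splits the test into the two documented cases
(interrupted user mode, or idle kernel FPU). The calling convention for
FPU-using kernel code is unchanged: check irq_fpu_usable() first if the code
can run in interrupt context. A sketch of that contract (the xor_blocks_*
names are hypothetical; irq_fpu_usable()/kernel_fpu_begin()/kernel_fpu_end()
are the real interfaces from this header):

	static void xor_blocks_sse(u8 *dst, const u8 *src, size_t len)
	{
		if (!irq_fpu_usable()) {
			/* in irq context with live FPU state: integer fallback */
			xor_blocks_generic(dst, src, len);
			return;
		}

		kernel_fpu_begin();
		/* SSE/AVX loop over dst/src goes here */
		kernel_fpu_end();
	}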
@@ -1,12 +0,0 @@
-#ifndef __IRQ_CONTROLLER__
-#define __IRQ_CONTROLLER__
-
-struct irq_domain {
-	int (*xlate)(struct irq_domain *h, const u32 *intspec, u32 intsize,
-		     u32 *out_hwirq, u32 *out_type);
-	void *priv;
-	struct device_node *controller;
-	struct list_head l;
-};
-
-#endif
@@ -21,7 +21,6 @@
 #include <asm/irq.h>
 #include <linux/atomic.h>
 #include <asm/setup.h>
-#include <asm/irq_controller.h>

 #ifdef CONFIG_OF
 extern int of_ioapic;
@@ -43,15 +42,6 @@ extern char cmd_line[COMMAND_LINE_SIZE];
 #define pci_address_to_pio pci_address_to_pio
 unsigned long pci_address_to_pio(phys_addr_t addr);

-/**
- * irq_dispose_mapping - Unmap an interrupt
- * @virq: linux virq number of the interrupt to unmap
- *
- * FIXME: We really should implement proper virq handling like power,
- * but that's going to be major surgery.
- */
-static inline void irq_dispose_mapping(unsigned int virq) { }
-
 #define HAVE_ARCH_DEVTREE_FIXUPS

 #endif /* __ASSEMBLY__ */
@@ -439,7 +439,6 @@ void intel_pmu_pebs_enable(struct perf_event *event)
 	hwc->config &= ~ARCH_PERFMON_EVENTSEL_INT;

 	cpuc->pebs_enabled |= 1ULL << hwc->idx;
-	WARN_ON_ONCE(cpuc->enabled);

 	if (x86_pmu.intel_cap.pebs_trap && event->attr.precise_ip > 1)
 		intel_pmu_lbr_enable(event);

@@ -72,8 +72,6 @@ void intel_pmu_lbr_enable(struct perf_event *event)
 	if (!x86_pmu.lbr_nr)
 		return;

-	WARN_ON_ONCE(cpuc->enabled);
-
 	/*
 	 * Reset the LBR stack if we changed task context to
 	 * avoid data leaks.
@@ -4,6 +4,7 @@
 #include <linux/bootmem.h>
 #include <linux/export.h>
 #include <linux/io.h>
+#include <linux/irqdomain.h>
 #include <linux/interrupt.h>
 #include <linux/list.h>
 #include <linux/of.h>
@@ -17,64 +18,14 @@
 #include <linux/initrd.h>

 #include <asm/hpet.h>
-#include <asm/irq_controller.h>
 #include <asm/apic.h>
 #include <asm/pci_x86.h>

 __initdata u64 initial_dtb;
 char __initdata cmd_line[COMMAND_LINE_SIZE];
-static LIST_HEAD(irq_domains);
-static DEFINE_RAW_SPINLOCK(big_irq_lock);

 int __initdata of_ioapic;

-#ifdef CONFIG_X86_IO_APIC
-static void add_interrupt_host(struct irq_domain *ih)
-{
-	unsigned long flags;
-
-	raw_spin_lock_irqsave(&big_irq_lock, flags);
-	list_add(&ih->l, &irq_domains);
-	raw_spin_unlock_irqrestore(&big_irq_lock, flags);
-}
-#endif
-
-static struct irq_domain *get_ih_from_node(struct device_node *controller)
-{
-	struct irq_domain *ih, *found = NULL;
-	unsigned long flags;
-
-	raw_spin_lock_irqsave(&big_irq_lock, flags);
-	list_for_each_entry(ih, &irq_domains, l) {
-		if (ih->controller == controller) {
-			found = ih;
-			break;
-		}
-	}
-	raw_spin_unlock_irqrestore(&big_irq_lock, flags);
-	return found;
-}
-
-unsigned int irq_create_of_mapping(struct device_node *controller,
-				   const u32 *intspec, unsigned int intsize)
-{
-	struct irq_domain *ih;
-	u32 virq, type;
-	int ret;
-
-	ih = get_ih_from_node(controller);
-	if (!ih)
-		return 0;
-	ret = ih->xlate(ih, intspec, intsize, &virq, &type);
-	if (ret)
-		return 0;
-	if (type == IRQ_TYPE_NONE)
-		return virq;
-	irq_set_irq_type(virq, type);
-	return virq;
-}
-EXPORT_SYMBOL_GPL(irq_create_of_mapping);
-
 unsigned long pci_address_to_pio(phys_addr_t address)
 {
 	/*
@@ -354,36 +305,43 @@ static struct of_ioapic_type of_ioapic_type[] =
 	},
 };

-static int ioapic_xlate(struct irq_domain *id, const u32 *intspec, u32 intsize,
-			u32 *out_hwirq, u32 *out_type)
+static int ioapic_xlate(struct irq_domain *domain,
+			struct device_node *controller,
+			const u32 *intspec, u32 intsize,
+			irq_hw_number_t *out_hwirq, u32 *out_type)
 {
-	struct mp_ioapic_gsi *gsi_cfg;
 	struct io_apic_irq_attr attr;
 	struct of_ioapic_type *it;
-	u32 line, idx, type;
+	u32 line, idx;
+	int rc;

-	if (intsize < 2)
+	if (WARN_ON(intsize < 2))
 		return -EINVAL;

-	line = *intspec;
-	idx = (u32) id->priv;
-	gsi_cfg = mp_ioapic_gsi_routing(idx);
-	*out_hwirq = line + gsi_cfg->gsi_base;
+	line = intspec[0];

-	intspec++;
-	type = *intspec;
-
-	if (type >= ARRAY_SIZE(of_ioapic_type))
+	if (intspec[1] >= ARRAY_SIZE(of_ioapic_type))
 		return -EINVAL;

-	it = of_ioapic_type + type;
-	*out_type = it->out_type;
+	it = &of_ioapic_type[intspec[1]];

+	idx = (u32) domain->host_data;
 	set_io_apic_irq_attr(&attr, idx, line, it->trigger, it->polarity);

-	return io_apic_setup_irq_pin_once(*out_hwirq, cpu_to_node(0), &attr);
+	rc = io_apic_setup_irq_pin_once(irq_find_mapping(domain, line),
+					cpu_to_node(0), &attr);
+	if (rc)
+		return rc;
+
+	*out_hwirq = line;
+	*out_type = it->out_type;
+	return 0;
 }

+const struct irq_domain_ops ioapic_irq_domain_ops = {
+	.xlate = ioapic_xlate,
+};
+
 static void __init ioapic_add_ofnode(struct device_node *np)
 {
 	struct resource r;
@@ -399,13 +357,14 @@ static void __init ioapic_add_ofnode(struct device_node *np)
 	for (i = 0; i < nr_ioapics; i++) {
 		if (r.start == mpc_ioapic_addr(i)) {
 			struct irq_domain *id;
+			struct mp_ioapic_gsi *gsi_cfg;

-			id = kzalloc(sizeof(*id), GFP_KERNEL);
+			gsi_cfg = mp_ioapic_gsi_routing(i);
+
+			id = irq_domain_add_legacy(np, 32, gsi_cfg->gsi_base, 0,
+						   &ioapic_irq_domain_ops,
+						   (void*)i);
 			BUG_ON(!id);
-			id->controller = np;
-			id->xlate = ioapic_xlate;
-			id->priv = (void *)i;
-			add_interrupt_host(id);
 			return;
 		}
 	}
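The ad-hoc irq_domains list, big_irq_lock, and the private
irq_create_of_mapping() copy are all replaced by the generic irqdomain code;
each IO-APIC now registers a legacy domain whose 32 pins are pre-mapped onto
its existing GSI range. The same call restated with the parameter meanings
spelled out (the comments are annotation, not part of the patch):

	id = irq_domain_add_legacy(np,
				   32,			/* size: pins on this IO-APIC */
				   gsi_cfg->gsi_base,	/* first Linux IRQ number */
				   0,			/* first hwirq (pin 0) */
				   &ioapic_irq_domain_ops,
				   (void *)i);		/* host_data: IO-APIC index */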
@@ -599,10 +599,10 @@ void __math_state_restore(void)
  * Careful.. There are problems with IBM-designed IRQ13 behaviour.
  * Don't touch unless you *really* know how it works.
  *
- * Must be called with kernel preemption disabled (in this case,
- * local interrupts are disabled at the call-site in entry.S).
+ * Must be called with kernel preemption disabled (eg with local
+ * interrupts disabled, as in the case of do_device_not_available).
  */
-asmlinkage void math_state_restore(void)
+void math_state_restore(void)
 {
 	struct thread_info *thread = current_thread_info();
 	struct task_struct *tsk = thread->task;
@@ -631,6 +631,7 @@ EXPORT_SYMBOL_GPL(math_state_restore);
 dotraplinkage void __kprobes
 do_device_not_available(struct pt_regs *regs, long error_code)
 {
+	WARN_ON_ONCE(!user_mode_vm(regs));
 #ifdef CONFIG_MATH_EMULATION
 	if (read_cr0() & X86_CR0_EM) {
 		struct math_emu_info info = { };
@@ -1659,7 +1659,7 @@ static void blkiocg_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
 		ioc = get_task_io_context(task, GFP_ATOMIC, NUMA_NO_NODE);
 		if (ioc) {
 			ioc_cgroup_changed(ioc);
-			put_io_context(ioc, NULL);
+			put_io_context(ioc);
 		}
 	}
 }
@@ -642,7 +642,7 @@ static inline void blk_free_request(struct request_queue *q, struct request *rq)
 	if (rq->cmd_flags & REQ_ELVPRIV) {
 		elv_put_request(q, rq);
 		if (rq->elv.icq)
-			put_io_context(rq->elv.icq->ioc, q);
+			put_io_context(rq->elv.icq->ioc);
 	}

 	mempool_free(rq, q->rq.rq_pool);
@@ -872,13 +872,15 @@ retry:
 	spin_unlock_irq(q->queue_lock);

 	/* create icq if missing */
-	if (unlikely(et->icq_cache && !icq))
+	if ((rw_flags & REQ_ELVPRIV) && unlikely(et->icq_cache && !icq)) {
 		icq = ioc_create_icq(q, gfp_mask);
+		if (!icq)
+			goto fail_icq;
+	}

-	/* rqs are guaranteed to have icq on elv_set_request() if requested */
-	if (likely(!et->icq_cache || icq))
-		rq = blk_alloc_request(q, icq, rw_flags, gfp_mask);
+	rq = blk_alloc_request(q, icq, rw_flags, gfp_mask);

+fail_icq:
 	if (unlikely(!rq)) {
 		/*
 		 * Allocation failed presumably due to memory. Undo anything
@@ -1210,7 +1212,6 @@ static bool bio_attempt_back_merge(struct request_queue *q, struct request *req,
 	req->ioprio = ioprio_best(req->ioprio, bio_prio(bio));

 	drive_stat_acct(req, 0);
-	elv_bio_merged(q, req, bio);
 	return true;
 }

@@ -1241,7 +1242,6 @@ static bool bio_attempt_front_merge(struct request_queue *q,
 	req->ioprio = ioprio_best(req->ioprio, bio_prio(bio));

 	drive_stat_acct(req, 0);
-	elv_bio_merged(q, req, bio);
 	return true;
 }

@@ -1255,13 +1255,12 @@ static bool bio_attempt_front_merge(struct request_queue *q,
 * on %current's plugged list. Returns %true if merge was successful,
 * otherwise %false.
 *
- * This function is called without @q->queue_lock; however, elevator is
- * accessed iff there already are requests on the plugged list which in
- * turn guarantees validity of the elevator.
- *
- * Note that, on successful merge, elevator operation
- * elevator_bio_merged_fn() will be called without queue lock. Elevator
- * must be ready for this.
+ * Plugging coalesces IOs from the same issuer for the same purpose without
+ * going through @q->queue_lock. As such it's more of an issuing mechanism
+ * than scheduling, and the request, while may have elvpriv data, is not
+ * added on the elevator at this point. In addition, we don't have
+ * reliable access to the elevator outside queue lock. Only check basic
+ * merging parameters without querying the elevator.
 */
 static bool attempt_plug_merge(struct request_queue *q, struct bio *bio,
 			       unsigned int *request_count)
@@ -1280,10 +1279,10 @@ static bool attempt_plug_merge(struct request_queue *q, struct bio *bio,

 		(*request_count)++;

-		if (rq->q != q)
+		if (rq->q != q || !blk_rq_merge_ok(rq, bio))
 			continue;

-		el_ret = elv_try_merge(rq, bio);
+		el_ret = blk_try_merge(rq, bio);
 		if (el_ret == ELEVATOR_BACK_MERGE) {
 			ret = bio_attempt_back_merge(q, rq, bio);
 			if (ret)
@@ -1345,12 +1344,14 @@ void blk_queue_bio(struct request_queue *q, struct bio *bio)
 	el_ret = elv_merge(q, &req, bio);
 	if (el_ret == ELEVATOR_BACK_MERGE) {
 		if (bio_attempt_back_merge(q, req, bio)) {
+			elv_bio_merged(q, req, bio);
 			if (!attempt_back_merge(q, req))
 				elv_merged_request(q, req, el_ret);
 			goto out_unlock;
 		}
 	} else if (el_ret == ELEVATOR_FRONT_MERGE) {
 		if (bio_attempt_front_merge(q, req, bio)) {
+			elv_bio_merged(q, req, bio);
 			if (!attempt_front_merge(q, req))
 				elv_merged_request(q, req, el_ret);
 			goto out_unlock;
block/blk-ioc.c

@@ -29,21 +29,6 @@ void get_io_context(struct io_context *ioc)
 }
 EXPORT_SYMBOL(get_io_context);

-/*
- * Releasing ioc may nest into another put_io_context() leading to nested
- * fast path release. As the ioc's can't be the same, this is okay but
- * makes lockdep whine. Keep track of nesting and use it as subclass.
- */
-#ifdef CONFIG_LOCKDEP
-#define ioc_release_depth(q)		((q) ? (q)->ioc_release_depth : 0)
-#define ioc_release_depth_inc(q)	(q)->ioc_release_depth++
-#define ioc_release_depth_dec(q)	(q)->ioc_release_depth--
-#else
-#define ioc_release_depth(q)		0
-#define ioc_release_depth_inc(q)	do { } while (0)
-#define ioc_release_depth_dec(q)	do { } while (0)
-#endif
-
 static void icq_free_icq_rcu(struct rcu_head *head)
 {
 	struct io_cq *icq = container_of(head, struct io_cq, __rcu_head);
@@ -75,11 +60,8 @@ static void ioc_exit_icq(struct io_cq *icq)
 	if (rcu_dereference_raw(ioc->icq_hint) == icq)
 		rcu_assign_pointer(ioc->icq_hint, NULL);

-	if (et->ops.elevator_exit_icq_fn) {
-		ioc_release_depth_inc(q);
+	if (et->ops.elevator_exit_icq_fn)
 		et->ops.elevator_exit_icq_fn(icq);
-		ioc_release_depth_dec(q);
-	}

 	/*
 	 * @icq->q might have gone away by the time RCU callback runs
@@ -98,8 +80,15 @@ static void ioc_release_fn(struct work_struct *work)
 	struct io_context *ioc = container_of(work, struct io_context,
 					      release_work);
 	struct request_queue *last_q = NULL;
+	unsigned long flags;

-	spin_lock_irq(&ioc->lock);
+	/*
+	 * Exiting icq may call into put_io_context() through elevator
+	 * which will trigger lockdep warning. The ioc's are guaranteed to
+	 * be different, use a different locking subclass here. Use
+	 * irqsave variant as there's no spin_lock_irq_nested().
+	 */
+	spin_lock_irqsave_nested(&ioc->lock, flags, 1);

 	while (!hlist_empty(&ioc->icq_list)) {
 		struct io_cq *icq = hlist_entry(ioc->icq_list.first,
@@ -121,15 +110,15 @@ static void ioc_release_fn(struct work_struct *work)
 			 */
 			if (last_q) {
 				spin_unlock(last_q->queue_lock);
-				spin_unlock_irq(&ioc->lock);
+				spin_unlock_irqrestore(&ioc->lock, flags);
 				blk_put_queue(last_q);
 			} else {
-				spin_unlock_irq(&ioc->lock);
+				spin_unlock_irqrestore(&ioc->lock, flags);
 			}

 			last_q = this_q;
-			spin_lock_irq(this_q->queue_lock);
-			spin_lock(&ioc->lock);
+			spin_lock_irqsave(this_q->queue_lock, flags);
+			spin_lock_nested(&ioc->lock, 1);
 			continue;
 		}
 		ioc_exit_icq(icq);
@@ -137,10 +126,10 @@ static void ioc_release_fn(struct work_struct *work)

 	if (last_q) {
 		spin_unlock(last_q->queue_lock);
-		spin_unlock_irq(&ioc->lock);
+		spin_unlock_irqrestore(&ioc->lock, flags);
 		blk_put_queue(last_q);
 	} else {
-		spin_unlock_irq(&ioc->lock);
+		spin_unlock_irqrestore(&ioc->lock, flags);
 	}

 	kmem_cache_free(iocontext_cachep, ioc);
@@ -149,79 +138,29 @@ static void ioc_release_fn(struct work_struct *work)
 /**
  * put_io_context - put a reference of io_context
  * @ioc: io_context to put
- * @locked_q: request_queue the caller is holding queue_lock of (hint)
  *
  * Decrement reference count of @ioc and release it if the count reaches
- * zero. If the caller is holding queue_lock of a queue, it can indicate
- * that with @locked_q. This is an optimization hint and the caller is
- * allowed to pass in %NULL even when it's holding a queue_lock.
+ * zero.
  */
-void put_io_context(struct io_context *ioc, struct request_queue *locked_q)
+void put_io_context(struct io_context *ioc)
 {
-	struct request_queue *last_q = locked_q;
 	unsigned long flags;

 	if (ioc == NULL)
 		return;

 	BUG_ON(atomic_long_read(&ioc->refcount) <= 0);
-	if (locked_q)
-		lockdep_assert_held(locked_q->queue_lock);
-
-	if (!atomic_long_dec_and_test(&ioc->refcount))
-		return;

 	/*
-	 * Destroy @ioc. This is a bit messy because icq's are chained
-	 * from both ioc and queue, and ioc->lock nests inside queue_lock.
-	 * The inner ioc->lock should be held to walk our icq_list and then
-	 * for each icq the outer matching queue_lock should be grabbed.
-	 * ie. We need to do reverse-order double lock dancing.
-	 *
-	 * Another twist is that we are often called with one of the
-	 * matching queue_locks held as indicated by @locked_q, which
-	 * prevents performing double-lock dance for other queues.
-	 *
-	 * So, we do it in two stages. The fast path uses the queue_lock
-	 * the caller is holding and, if other queues need to be accessed,
-	 * uses trylock to avoid introducing locking dependency. This can
-	 * handle most cases, especially if @ioc was performing IO on only
-	 * single device.
-	 *
-	 * If trylock doesn't cut it, we defer to @ioc->release_work which
-	 * can do all the double-locking dancing.
+	 * Releasing ioc requires reverse order double locking and we may
+	 * already be holding a queue_lock. Do it asynchronously from wq.
 	 */
-	spin_lock_irqsave_nested(&ioc->lock, flags,
-				 ioc_release_depth(locked_q));
-
-	while (!hlist_empty(&ioc->icq_list)) {
-		struct io_cq *icq = hlist_entry(ioc->icq_list.first,
-						struct io_cq, ioc_node);
-		struct request_queue *this_q = icq->q;
-
-		if (this_q != last_q) {
-			if (last_q && last_q != locked_q)
-				spin_unlock(last_q->queue_lock);
-			last_q = NULL;
-
-			if (!spin_trylock(this_q->queue_lock))
-				break;
-			last_q = this_q;
-			continue;
-		}
-		ioc_exit_icq(icq);
+	if (atomic_long_dec_and_test(&ioc->refcount)) {
+		spin_lock_irqsave(&ioc->lock, flags);
+		if (!hlist_empty(&ioc->icq_list))
+			schedule_work(&ioc->release_work);
+		spin_unlock_irqrestore(&ioc->lock, flags);
 	}
-
-	if (last_q && last_q != locked_q)
-		spin_unlock(last_q->queue_lock);
-
-	spin_unlock_irqrestore(&ioc->lock, flags);
-
-	/* if no icq is left, we're done; otherwise, kick release_work */
-	if (hlist_empty(&ioc->icq_list))
-		kmem_cache_free(iocontext_cachep, ioc);
-	else
-		schedule_work(&ioc->release_work);
 }
 EXPORT_SYMBOL(put_io_context);

@@ -236,7 +175,7 @@ void exit_io_context(struct task_struct *task)
 	task_unlock(task);

 	atomic_dec(&ioc->nr_tasks);
-	put_io_context(ioc, NULL);
+	put_io_context(ioc);
 }

 /**
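put_io_context() loses its request_queue hint: the final put no longer
attempts the trylock dance inline, it just drops the reference and, if any
icqs remain, punts the reverse-order double locking to release_work. That
makes the function safe to call from any context, with or without a
queue_lock held, which is why every caller in this diff simply drops the
second argument. The typical caller pattern now reads (illustrative, using
the real get_task_io_context()/put_io_context() interfaces):

	struct io_context *ioc;

	ioc = get_task_io_context(task, GFP_ATOMIC, NUMA_NO_NODE);
	if (ioc) {
		/* ... inspect or update ioc ... */
		put_io_context(ioc);	/* fine even under a queue_lock */
	}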
@@ -471,3 +471,40 @@ int blk_attempt_req_merge(struct request_queue *q, struct request *rq,
 {
 	return attempt_merge(q, rq, next);
 }
+
+bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
+{
+	if (!rq_mergeable(rq))
+		return false;
+
+	/* don't merge file system requests and discard requests */
+	if ((bio->bi_rw & REQ_DISCARD) != (rq->bio->bi_rw & REQ_DISCARD))
+		return false;
+
+	/* don't merge discard requests and secure discard requests */
+	if ((bio->bi_rw & REQ_SECURE) != (rq->bio->bi_rw & REQ_SECURE))
+		return false;
+
+	/* different data direction or already started, don't merge */
+	if (bio_data_dir(bio) != rq_data_dir(rq))
+		return false;
+
+	/* must be same device and not a special request */
+	if (rq->rq_disk != bio->bi_bdev->bd_disk || rq->special)
+		return false;
+
+	/* only merge integrity protected bio into ditto rq */
+	if (bio_integrity(bio) != blk_integrity_rq(rq))
+		return false;
+
+	return true;
+}
+
+int blk_try_merge(struct request *rq, struct bio *bio)
+{
+	if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_sector)
+		return ELEVATOR_BACK_MERGE;
+	else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_sector)
+		return ELEVATOR_FRONT_MERGE;
+	return ELEVATOR_NO_MERGE;
+}
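blk_try_merge() is pure sector arithmetic on the request's current extent.
Worked through with concrete numbers (illustrative values, not from the
patch): for a request covering sectors [100, 108), i.e. blk_rq_pos() == 100
and blk_rq_sectors() == 8, a bio starting at sector 108 satisfies
100 + 8 == 108 and is a back merge, while an 8-sector bio starting at sector
92 satisfies 100 - 8 == 92 and is a front merge; anything else is
ELEVATOR_NO_MERGE. The attempt_plug_merge() caller above only reaches this
check after blk_rq_merge_ok() has screened out mismatched discard/secure
flags, data direction, device, and integrity protection.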
@@ -137,6 +137,8 @@ int blk_attempt_req_merge(struct request_queue *q, struct request *rq,
 			  struct request *next);
 void blk_recalc_rq_segments(struct request *rq);
 void blk_rq_set_mixed_merge(struct request *rq);
+bool blk_rq_merge_ok(struct request *rq, struct bio *bio);
+int blk_try_merge(struct request *rq, struct bio *bio);

 void blk_queue_congestion_threshold(struct request_queue *q);
|
Some files were not shown because too many files have changed in this diff Show More
Loading…
Reference in New Issue
Block a user