Commit 17e6b00a, authored by Linus Torvalds

Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq updates from Thomas Gleixner:
 "This updated pull request does not contain the last few GIC related
  patches which were reported to cause a regression.  There is a fix
  available, but I let it breed for a couple of days first.

  The irq department provides:

   - new infrastructure to support non PCI based MSI interrupts
   - a couple of new irq chip drivers
   - the usual pile of fixlets and updates to irq chip drivers
   - preparatory changes for removal of the irq argument from interrupt
     flow handlers
   - preparatory changes to remove IRQF_VALID"
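
The preparatory changes for removing the irq argument from flow handlers, noted in the bullets above, follow one recurring pattern across the architecture and irqchip hunks below: the handler stops relying on its irq parameter and re-derives everything it needs from the irq_desc. A minimal sketch of that pattern (the foo_* names and the domain handler data are invented for illustration; the accessors and irq_set_chained_handler_and_data() are the real helpers used throughout this series):

#include <linux/irq.h>
#include <linux/irqdomain.h>

static u32 foo_read_pending(void);	/* hypothetical: returns the pending hw irq number */

static void foo_demux_irq(unsigned int __irq, struct irq_desc *desc)
{
	/* Data comes from the descriptor, no longer looked up by irq number. */
	struct irq_domain *domain = irq_desc_get_handler_data(desc);
	/* Handlers that still need the Linux irq number re-derive it: */
	unsigned int irq = irq_desc_get_irq(desc);
	u32 hwirq = foo_read_pending();

	pr_debug("demux on irq %u\n", irq);
	generic_handle_irq(irq_find_mapping(domain, hwirq));
}

static void __init foo_init(struct irq_domain *domain, unsigned int parent_irq)
{
	/* Replaces the old irq_set_handler_data() + irq_set_chained_handler() pair. */
	irq_set_chained_handler_and_data(parent_irq, foo_demux_irq, domain);
}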

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (129 commits)
  irqchip/imx-gpcv2: IMX GPCv2 driver for wakeup sources
  irqchip: Add bcm2836 interrupt controller for Raspberry Pi 2
  irqchip: Add documentation for the bcm2836 interrupt controller
  irqchip/bcm2835: Add support for being used as a second level controller
  irqchip/bcm2835: Refactor handle_IRQ() calls out of MAKE_HWIRQ
  PCI: xilinx: Fix typo in function name
  irqchip/gic: Ensure gic_cpu_if_up/down() programs correct GIC instance
  irqchip/gic: Only allow the primary GIC to set the CPU map
  PCI/MSI: pci-xgene-msi: Consolidate chained IRQ handler install/remove
  unicore32/irq: Prepare puv3_gpio_handler for irq argument removal
  tile/pci_gx: Prepare trio_handle_level_irq for irq argument removal
  m68k/irq: Prepare irq handlers for irq argument removal
  C6X/megamode-pic: Prepare megamod_irq_cascade for irq argument removal
  blackfin: Prepare irq handlers for irq argument removal
  arc/irq: Prepare idu_cascade_isr for irq argument removal
  sparc/irq: Use access helper irq_data_get_affinity_mask()
  sparc/irq: Use helper irq_data_get_irq_handler_data()
  parisc/irq: Use access helper irq_data_get_affinity_mask()
  mn10300/irq: Use access helper irq_data_get_affinity_mask()
  irqchip/i8259: Prepare i8259_irq_dispatch for irq argument removal
  ...
......@@ -5,9 +5,14 @@ The BCM2835 contains a custom top-level interrupt controller, which supports
controller, or the HW block containing it, is referred to occasionally
as "armctrl" in the SoC documentation, hence naming of this binding.
The BCM2836 contains the same interrupt controller with the same
interrupts, but the per-CPU interrupt controller is the root, and an
interrupt there indicates that the ARMCTRL has an interrupt to handle.
Required properties:
- compatible : should be "brcm,bcm2835-armctrl-ic"
- compatible : should be "brcm,bcm2835-armctrl-ic" or
"brcm,bcm2836-armctrl-ic"
- reg : Specifies base physical address and size of the registers.
- interrupt-controller : Identifies the node as an interrupt controller
- #interrupt-cells : Specifies the number of cells needed to encode an
......@@ -20,6 +25,12 @@ Required properties:
The 2nd cell contains the interrupt number within the bank. Valid values
are 0..7 for bank 0, and 0..31 for bank 1.
Additional required properties for brcm,bcm2836-armctrl-ic:
- interrupt-parent : Specifies the parent interrupt controller when this
controller is the second level.
- interrupts : Specifies the interrupt on the parent for this interrupt
controller to handle.
The interrupt sources are as follows:
Bank 0:
......@@ -102,9 +113,21 @@ Bank 2:
Example:
/* BCM2835, first level */
intc: interrupt-controller {
compatible = "brcm,bcm2835-armctrl-ic";
reg = <0x7e00b200 0x200>;
interrupt-controller;
#interrupt-cells = <2>;
};
/* BCM2836, second level */
intc: interrupt-controller {
compatible = "brcm,bcm2836-armctrl-ic";
reg = <0x7e00b200 0x200>;
interrupt-controller;
#interrupt-cells = <2>;
interrupt-parent = <&local_intc>;
interrupts = <8>;
};
BCM2836 per-CPU interrupt controller
The BCM2836 has a per-cpu interrupt controller for the timer, PMU
events, and SMP IPIs. One of the CPUs may receive interrupts for the
peripheral (GPU) events, which chain to the BCM2835-style interrupt
controller.
Required properties:
- compatible: Should be "brcm,bcm2836-l1-intc"
- reg: Specifies base physical address and size of the
registers
- interrupt-controller: Identifies the node as an interrupt controller
- #interrupt-cells: Specifies the number of cells needed to encode an
interrupt source. The value shall be 1
Please refer to interrupts.txt in this directory for details of the common
Interrupt Controllers bindings used by client devices.
The interrupt sources are as follows:
0: CNTPSIRQ
1: CNTPNSIRQ
2: CNTHPIRQ
3: CNTVIRQ
8: GPU_FAST
9: PMU_FAST
Example:
local_intc: local_intc {
compatible = "brcm,bcm2836-l1-intc";
reg = <0x40000000 0x100>;
interrupt-controller;
#interrupt-cells = <1>;
interrupt-parent = <&local_intc>;
};
......@@ -59,7 +59,7 @@ int irq_select_affinity(unsigned int irq)
cpu = (cpu < (NR_CPUS-1) ? cpu + 1 : 0);
last_cpu = cpu;
cpumask_copy(data->affinity, cpumask_of(cpu));
cpumask_copy(irq_data_get_affinity_mask(data), cpumask_of(cpu));
chip->irq_set_affinity(data, cpumask_of(cpu), false);
return 0;
}
......
......@@ -252,9 +252,10 @@ static struct irq_chip idu_irq_chip = {
static int idu_first_irq;
static void idu_cascade_isr(unsigned int core_irq, struct irq_desc *desc)
static void idu_cascade_isr(unsigned int __core_irq, struct irq_desc *desc)
{
struct irq_domain *domain = irq_desc_get_handler_data(desc);
unsigned int core_irq = irq_desc_get_irq(desc);
unsigned int idu_irq;
idu_irq = core_irq - idu_first_irq;
......
......@@ -62,8 +62,6 @@ static void __init r8a7779_map_io(void)
static void __init r8a7779_init_irq_dt(void)
{
gic_set_irqchip_flags(IRQCHIP_SKIP_SET_WAKE);
irqchip_init();
/* route all interrupts to ARM */
......
......@@ -56,7 +56,6 @@ void __init ux500_init_irq(void)
struct device_node *np;
struct resource r;
gic_set_irqchip_flags(IRQCHIP_SKIP_SET_WAKE | IRQCHIP_MASK_ON_SUSPEND);
irqchip_init();
np = of_find_compatible_node(NULL, NULL, "stericsson,db8500-prcmu");
of_address_to_resource(np, 0, &r);
......
......@@ -80,7 +80,7 @@ static void tc2_pm_cpu_powerdown_prepare(unsigned int cpu, unsigned int cluster)
* to the CPU by disabling the GIC CPU IF to prevent wfi
* from completing execution behind power controller back
*/
gic_cpu_if_down();
gic_cpu_if_down(0);
}
static void tc2_pm_cluster_powerdown_prepare(unsigned int cluster)
......
......@@ -186,7 +186,6 @@ static void __init zynq_map_io(void)
static void __init zynq_irq_init(void)
{
gic_set_irqchip_flags(IRQCHIP_SKIP_SET_WAKE | IRQCHIP_MASK_ON_SUSPEND);
irqchip_init();
}
......
......@@ -128,9 +128,9 @@ static int eic_set_irq_type(struct irq_data *d, unsigned int flow_type)
irqd_set_trigger_type(d, flow_type);
if (flow_type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH))
__irq_set_handler_locked(irq, handle_level_irq);
irq_set_handler_locked(d, handle_level_irq);
else
__irq_set_handler_locked(irq, handle_edge_irq);
irq_set_handler_locked(d, handle_edge_irq);
return IRQ_SET_MASK_OK_NOCOPY;
}
......
......@@ -286,7 +286,7 @@ static void gpio_irq_handler(unsigned irq, struct irq_desc *desc)
struct pio_device *pio = irq_desc_get_chip_data(desc);
unsigned gpio_irq;
gpio_irq = (unsigned) irq_get_handler_data(irq);
gpio_irq = (unsigned) irq_desc_get_handler_data(desc);
for (;;) {
u32 isr;
......@@ -312,7 +312,6 @@ gpio_irq_setup(struct pio_device *pio, int irq, int gpio_irq)
unsigned i;
irq_set_chip_data(irq, pio);
irq_set_handler_data(irq, (void *)gpio_irq);
for (i = 0; i < 32; i++, gpio_irq++) {
irq_set_chip_data(gpio_irq, pio);
......@@ -320,7 +319,8 @@ gpio_irq_setup(struct pio_device *pio, int irq, int gpio_irq)
handle_simple_irq);
}
irq_set_chained_handler(irq, gpio_irq_handler);
irq_set_chained_handler_and_data(irq, gpio_irq_handler,
(void *)gpio_irq);
}
/*--------------------------------------------------------------------------*/
......
......@@ -182,9 +182,11 @@ static struct irq_chip bf537_mac_rx_irqchip = {
.irq_unmask = bf537_mac_rx_unmask_irq,
};
static void bf537_demux_mac_rx_irq(unsigned int int_irq,
static void bf537_demux_mac_rx_irq(unsigned int __int_irq,
struct irq_desc *desc)
{
unsigned int int_irq = irq_desc_get_irq(desc);
if (bfin_read_DMA1_IRQ_STATUS() & (DMA_DONE | DMA_ERR))
bfin_handle_irq(IRQ_MAC_RX);
else
......
......@@ -194,7 +194,8 @@ void bfin_internal_unmask_irq(unsigned int irq)
#ifdef CONFIG_SMP
static void bfin_internal_unmask_irq_chip(struct irq_data *d)
{
bfin_internal_unmask_irq_affinity(d->irq, d->affinity);
bfin_internal_unmask_irq_affinity(d->irq,
irq_data_get_affinity_mask(d));
}
static int bfin_internal_set_affinity(struct irq_data *d,
......@@ -685,12 +686,12 @@ void bfin_demux_mac_status_irq(unsigned int int_err_irq,
}
#endif
static inline void bfin_set_irq_handler(unsigned irq, irq_flow_handler_t handle)
static inline void bfin_set_irq_handler(struct irq_data *d, irq_flow_handler_t handle)
{
#ifdef CONFIG_IPIPE
handle = handle_level_irq;
#endif
__irq_set_handler_locked(irq, handle);
irq_set_handler_locked(d, handle);
}
#ifdef CONFIG_GPIO_ADI
......@@ -802,9 +803,9 @@ static int bfin_gpio_irq_type(struct irq_data *d, unsigned int type)
}
if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING))
bfin_set_irq_handler(irq, handle_edge_irq);
bfin_set_irq_handler(d, handle_edge_irq);
else
bfin_set_irq_handler(irq, handle_level_irq);
bfin_set_irq_handler(d, handle_level_irq);
return 0;
}
......@@ -824,9 +825,9 @@ static void bfin_demux_gpio_block(unsigned int irq)
}
}
void bfin_demux_gpio_irq(unsigned int inta_irq,
struct irq_desc *desc)
void bfin_demux_gpio_irq(unsigned int __inta_irq, struct irq_desc *desc)
{
unsigned int inta_irq = irq_desc_get_irq(desc);
unsigned int irq;
switch (inta_irq) {
......
......@@ -93,10 +93,11 @@ static struct irq_chip megamod_chip = {
.irq_unmask = unmask_megamod,
};
static void megamod_irq_cascade(unsigned int irq, struct irq_desc *desc)
static void megamod_irq_cascade(unsigned int __irq, struct irq_desc *desc)
{
struct megamod_cascade_data *cascade;
struct megamod_pic *pic;
unsigned int irq;
u32 events;
int n, idx;
......@@ -282,8 +283,8 @@ static struct megamod_pic * __init init_megamod_pic(struct device_node *np)
soc_writel(~0, &pic->regs->evtmask[i]);
soc_writel(~0, &pic->regs->evtclr[i]);
irq_set_handler_data(irq, &cascade_data[i]);
irq_set_chained_handler(irq, megamod_irq_cascade);
irq_set_chained_handler_and_data(irq, megamod_irq_cascade,
&cascade_data[i]);
}
/* Finally, set up the MUX registers */
......
......@@ -610,9 +610,9 @@ register_intr (unsigned int gsi, int irq, unsigned char delivery,
chip->name, irq_type->name);
chip = irq_type;
}
__irq_set_chip_handler_name_locked(irq, chip, trigger == IOSAPIC_EDGE ?
handle_edge_irq : handle_level_irq,
NULL);
irq_set_chip_handler_name_locked(irq_get_irq_data(irq), chip,
trigger == IOSAPIC_EDGE ? handle_edge_irq : handle_level_irq,
NULL);
return 0;
}
......@@ -838,7 +838,7 @@ iosapic_unregister_intr (unsigned int gsi)
if (iosapic_intr_info[irq].count == 0) {
#ifdef CONFIG_SMP
/* Clear affinity */
cpumask_setall(irq_get_irq_data(irq)->affinity);
cpumask_setall(irq_get_affinity_mask(irq));
#endif
/* Clear the interrupt information */
iosapic_intr_info[irq].dest = 0;
......
......@@ -67,7 +67,7 @@ static char irq_redir [NR_IRQS]; // = { [0 ... NR_IRQS-1] = 1 };
void set_irq_affinity_info (unsigned int irq, int hwid, int redir)
{
if (irq < NR_IRQS) {
cpumask_copy(irq_get_irq_data(irq)->affinity,
cpumask_copy(irq_get_affinity_mask(irq),
cpumask_of(cpu_logical_id(hwid)));
irq_redir[irq] = (char) (redir & 0xff);
}
......@@ -119,8 +119,8 @@ static void migrate_irqs(void)
if (irqd_is_per_cpu(data))
continue;
if (cpumask_any_and(data->affinity, cpu_online_mask)
>= nr_cpu_ids) {
if (cpumask_any_and(irq_data_get_affinity_mask(data),
cpu_online_mask) >= nr_cpu_ids) {
/*
* Save it for phase 2 processing
*/
......
......@@ -23,7 +23,7 @@ static int ia64_set_msi_irq_affinity(struct irq_data *idata,
if (irq_prepare_move(irq, cpu))
return -1;
__get_cached_msi_msg(idata->msi_desc, &msg);
__get_cached_msi_msg(irq_data_get_msi_desc(idata), &msg);
addr = msg.address_lo;
addr &= MSI_ADDR_DEST_ID_MASK;
......@@ -36,7 +36,7 @@ static int ia64_set_msi_irq_affinity(struct irq_data *idata,
msg.data = data;
pci_write_msi_msg(irq, &msg);
cpumask_copy(idata->affinity, cpumask_of(cpu));
cpumask_copy(irq_data_get_affinity_mask(idata), cpumask_of(cpu));
return 0;
}
......@@ -148,7 +148,7 @@ static int dmar_msi_set_affinity(struct irq_data *data,
msg.address_lo |= MSI_ADDR_DEST_ID_CPU(cpu_physical_id(cpu));
dmar_msi_write(irq, &msg);
cpumask_copy(data->affinity, mask);
cpumask_copy(irq_data_get_affinity_mask(data), mask);
return 0;
}
......
......@@ -175,7 +175,7 @@ static int sn_set_msi_irq_affinity(struct irq_data *data,
* Release XIO resources for the old MSI PCI address
*/
__get_cached_msi_msg(data->msi_desc, &msg);
__get_cached_msi_msg(irq_data_get_msi_desc(data), &msg);
sn_pdev = (struct pcidev_info *)sn_irq_info->irq_pciioinfo;
pdev = sn_pdev->pdi_linux_pcidev;
provider = SN_PCIDEV_BUSPROVIDER(pdev);
......@@ -206,7 +206,7 @@ static int sn_set_msi_irq_affinity(struct irq_data *data,
msg.address_lo = (u32)(bus_addr & 0x00000000ffffffff);
pci_write_msi_msg(irq, &msg);
cpumask_copy(data->affinity, cpu_mask);
cpumask_copy(irq_data_get_affinity_mask(data), cpu_mask);
return 0;
}
......
......@@ -143,8 +143,10 @@ static int intc_irq_set_type(struct irq_data *d, unsigned int type)
* We need to be careful with the masking/acking due to the side effects
* of masking an interrupt.
*/
static void intc_external_irq(unsigned int irq, struct irq_desc *desc)
static void intc_external_irq(unsigned int __irq, struct irq_desc *desc)
{
unsigned int irq = irq_desc_get_irq(desc);
irq_desc_get_chip(desc)->irq_ack(&desc->irq_data);
handle_simple_irq(irq, desc);
}
......
......@@ -63,13 +63,15 @@ void __init oss_nubus_init(void)
* Handle miscellaneous OSS interrupts.
*/
static void oss_irq(unsigned int irq, struct irq_desc *desc)
static void oss_irq(unsigned int __irq, struct irq_desc *desc)
{
int events = oss->irq_pending &
(OSS_IP_IOPSCC | OSS_IP_SCSI | OSS_IP_IOPISM);
(OSS_IP_IOPSCC | OSS_IP_SCSI | OSS_IP_IOPISM);
#ifdef DEBUG_IRQS
if ((console_loglevel == 10) && !(events & OSS_IP_SCSI)) {
unsigned int irq = irq_desc_get_irq(desc);
printk("oss_irq: irq %u events = 0x%04X\n", irq,
(int) oss->irq_pending);
}
......
......@@ -113,9 +113,10 @@ void __init psc_init(void)
* PSC interrupt handler. It's a lot like the VIA interrupt handler.
*/
static void psc_irq(unsigned int irq, struct irq_desc *desc)
static void psc_irq(unsigned int __irq, struct irq_desc *desc)
{
unsigned int offset = (unsigned int)irq_desc_get_handler_data(desc);
unsigned int irq = irq_desc_get_irq(desc);
int pIFR = pIFRbase + offset;
int pIER = pIERbase + offset;
int irq_num;
......
......@@ -11,12 +11,11 @@
#include <linux/irqdomain.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/of_address.h>
#include <linux/io.h>
#include <linux/bug.h>
#include "../../drivers/irqchip/irqchip.h"
static void __iomem *intc_baseaddr;
/* No one else should require these constants, so define them locally here. */
......
......@@ -1071,10 +1071,6 @@ config HOTPLUG_CPU
config SYS_SUPPORTS_HOTPLUG_CPU
bool
config I8259
bool
select IRQ_DOMAIN
config MIPS_BONITO64
bool
......
......@@ -17,7 +17,6 @@
#include <linux/interrupt.h>
#include <linux/irqchip.h>
#include <linux/of_irq.h>
#include "../../../drivers/irqchip/irqchip.h"
#include <asm/irq_cpu.h>
#include <asm/mipsregs.h>
......
......@@ -34,5 +34,5 @@ void __init arch_init_irq(void)
irqchip_init();
}
OF_DECLARE_2(irqchip, mips_cpu_intc, "mti,cpu-interrupt-controller",
IRQCHIP_DECLARE(mips_cpu_intc, "mti,cpu-interrupt-controller",
mips_cpu_irq_of_init);
......@@ -61,7 +61,6 @@ obj-$(CONFIG_MIPS_VPE_APSP_API) += rtlx.o
obj-$(CONFIG_MIPS_VPE_APSP_API_CMP) += rtlx-cmp.o
obj-$(CONFIG_MIPS_VPE_APSP_API_MT) += rtlx-mt.o
obj-$(CONFIG_I8259) += i8259.o
obj-$(CONFIG_IRQ_CPU_RM7K) += irq-rm7000.o
obj-$(CONFIG_MIPS_MSC) += irq-msc01.o
obj-$(CONFIG_IRQ_TXX9) += irq_txx9.o
......
......@@ -200,7 +200,7 @@ int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
if (type == PCI_CAP_ID_MSI && nvec > 1)
return 1;
list_for_each_entry(entry, &dev->msi_list, list) {
for_each_pci_msi_entry(entry, dev) {
ret = arch_setup_msi_irq(dev, entry);
if (ret < 0)
return ret;
......
......@@ -116,7 +116,7 @@ int __init init_clockevents(void)
{
struct irq_data *data;
data = irq_get_irq_data(cd->irq);
cpumask_copy(data->affinity, cpumask_of(cpu));
cpumask_copy(irq_data_get_affinity_mask(data), cpumask_of(cpu));
iact->flags |= IRQF_NOBALANCING;
}
#endif
......
......@@ -87,7 +87,8 @@ static void mn10300_cpupic_mask_ack(struct irq_data *d)
tmp2 = GxICR(irq);
irq_affinity_online[irq] =
cpumask_any_and(d->affinity, cpu_online_mask);
cpumask_any_and(irq_data_get_affinity_mask(d),
cpu_online_mask);
CROSS_GxICR(irq, irq_affinity_online[irq]) =
(tmp & (GxICR_LEVEL | GxICR_ENABLE)) | GxICR_DETECT;
tmp = CROSS_GxICR(irq, irq_affinity_online[irq]);
......@@ -124,7 +125,7 @@ static void mn10300_cpupic_unmask_clear(struct irq_data *d)
} else {
tmp = GxICR(irq);
irq_affinity_online[irq] = cpumask_any_and(d->affinity,
irq_affinity_online[irq] = cpumask_any_and(irq_data_get_affinity_mask(d),
cpu_online_mask);
CROSS_GxICR(irq, irq_affinity_online[irq]) = (tmp & GxICR_LEVEL) | GxICR_ENABLE | GxICR_DETECT;
tmp = CROSS_GxICR(irq, irq_affinity_online[irq]);
......@@ -316,15 +317,16 @@ void migrate_irqs(void)
self = smp_processor_id();
for (irq = 0; irq < NR_IRQS; irq++) {
struct irq_data *data = irq_get_irq_data(irq);
struct cpumask *mask = irq_data_get_affinity_mask(data);
if (irqd_is_per_cpu(data))
continue;
if (cpumask_test_cpu(self, data->affinity) &&
if (cpumask_test_cpu(self, mask) &&
!cpumask_intersects(&irq_affinity[irq], cpu_online_mask)) {
int cpu_id;
cpu_id = cpumask_first(cpu_online_mask);
cpumask_set_cpu(cpu_id, data->affinity);
cpumask_set_cpu(cpu_id, mask);
}
/* We need to operate irq_affinity_online atomically. */
arch_local_cli_save(flags);
......@@ -335,8 +337,7 @@ void migrate_irqs(void)
GxICR(irq) = x & GxICR_LEVEL;
tmp = GxICR(irq);
new = cpumask_any_and(data->affinity,
cpu_online_mask);
new = cpumask_any_and(mask, cpu_online_mask);
irq_affinity_online[irq] = new;
CROSS_GxICR(irq, new) =
......
......@@ -131,7 +131,7 @@ static int cpu_set_affinity_irq(struct irq_data *d, const struct cpumask *dest,
if (cpu_dest < 0)
return -1;
cpumask_copy(d->affinity, dest);
cpumask_copy(irq_data_get_affinity_mask(d), dest);
return 0;
}
......@@ -339,7 +339,7 @@ unsigned long txn_affinity_addr(unsigned int irq, int cpu)
{
#ifdef CONFIG_SMP
struct irq_data *d = irq_get_irq_data(irq);
cpumask_copy(d->affinity, cpumask_of(cpu));
cpumask_copy(irq_data_get_affinity_mask(d), cpumask_of(cpu));
#endif
return per_cpu(cpu_data, cpu).txn_addr;
......@@ -508,7 +508,7 @@ void do_cpu_irq_mask(struct pt_regs *regs)
unsigned long eirr_val;
int irq, cpu = smp_processor_id();
#ifdef CONFIG_SMP
struct irq_desc *desc;
struct irq_data *irq_data;
cpumask_t dest;
#endif
......@@ -522,9 +522,9 @@ void do_cpu_irq_mask(struct pt_regs *regs)
irq = eirr_to_irq(eirr_val);
#ifdef CONFIG_SMP
desc = irq_to_desc(irq);
cpumask_copy(&dest, desc->irq_data.affinity);
if (irqd_is_per_cpu(&desc->irq_data) &&
irq_data = irq_get_irq_data(irq);
cpumask_copy(&dest, irq_data_get_affinity_mask(irq_data));
if (irqd_is_per_cpu(irq_data) &&
!cpumask_test_cpu(smp_processor_id(), &dest)) {
int cpu = cpumask_first(&dest);
......
......@@ -123,7 +123,8 @@ cpld_pic_cascade(unsigned int irq, struct irq_desc *desc)
}
static int
cpld_pic_host_match(struct irq_domain *h, struct device_node *node)
cpld_pic_host_match(struct irq_domain *h, struct device_node *node,
enum irq_domain_bus_token bus_token)
{
return cpld_pic_node == node;
}
......
......@@ -213,7 +213,7 @@ static int setup_msi_msg_address(struct pci_dev *dev, struct msi_msg *msg)
return -ENODEV;
}
entry = list_first_entry(&dev->msi_list, struct msi_desc, list);
entry = first_pci_msi_entry(dev);
for (; dn; dn = of_get_next_parent(dn)) {
if (entry->msi_attrib.is_64) {
......@@ -269,7 +269,7 @@ static int axon_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
if (rc)
return rc;
list_for_each_entry(entry, &dev->msi_list, list) {
for_each_pci_msi_entry(entry, dev) {
virq = irq_create_direct_mapping(msic->irq_domain);
if (virq == NO_IRQ) {
dev_warn(&dev->dev,
......@@ -292,7 +292,7 @@ static void axon_msi_teardown_msi_irqs(struct pci_dev *dev)
dev_dbg(&dev->dev, "axon_msi: tearing down msi irqs\n");
list_for_each_entry(entry, &dev->msi_list, list) {
for_each_pci_msi_entry(entry, dev) {
if (entry->irq == NO_IRQ)
continue;
......
......@@ -222,7 +222,8 @@ void iic_request_IPIs(void)
#endif /* CONFIG_SMP */
static int iic_host_match(struct irq_domain *h, struct device_node *node)
static int iic_host_match(struct irq_domain *h, struct device_node *node,
enum irq_domain_bus_token bus_token)
{
return of_device_is_compatible(node,
"IBM,CBEA-Internal-Interrupt-Controller");
......
......@@ -108,7 +108,8 @@ static int flipper_pic_map(struct irq_domain *h, unsigned int virq,
return 0;
}
static int flipper_pic_match(struct irq_domain *h, struct device_node *np)
static int flipper_pic_match(struct irq_domain *h, struct device_node *np,
enum irq_domain_bus_token bus_token)
{
return 1;
}
......
......@@ -66,7 +66,7 @@ static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev)
pr_debug("pasemi_msi_teardown_msi_irqs, pdev %p\n", pdev);
list_for_each_entry(entry, &pdev->msi_list, list) {
for_each_pci_msi_entry(entry, pdev) {
if (entry->irq == NO_IRQ)
continue;
......@@ -94,7 +94,7 @@ static int pasemi_msi_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
msg.address_hi = 0;
msg.address_lo = PASEMI_MSI_ADDR;
list_for_each_entry(entry, &pdev->msi_list, list) {
for_each_pci_msi_entry(entry, pdev) {
/* Allocate 16 interrupts for now, since that's the grouping for
* affinity. This can be changed later if it turns out 32 is too
* few MSIs for someone, but restrictions will apply to how the
......
......@@ -268,7 +268,8 @@ static struct irqaction gatwick_cascade_action = {
.name = "cascade",
};
static int pmac_pic_host_match(struct irq_domain *h, struct device_node *node)
static int pmac_pic_host_match(struct irq_domain *h, struct device_node *node,
enum irq_domain_bus_token bus_token)
{
/* We match all, we don't always have a node anyway */
return 1;
......
......@@ -134,7 +134,8 @@ static void opal_handle_irq_work(struct irq_work *work)
opal_handle_events(be64_to_cpu(last_outstanding_events));
}
static int opal_event_match(struct irq_domain *h, struct device_node *node)
static int opal_event_match(struct irq_domain *h, struct device_node *node,
enum irq_domain_bus_token bus_token)
{
return h->of_node == node;
}
......
......@@ -61,7 +61,7 @@ int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
if (pdev->no_64bit_msi && !phb->msi32_support)
return -ENODEV;
list_for_each_entry(entry, &pdev->msi_list, list) {
for_each_pci_msi_entry(entry, pdev) {
if (!entry->msi_attrib.is_64 && !phb->msi32_support) {
pr_warn("%s: Supports only 64-bit MSIs\n",
pci_name(pdev));
......@@ -103,7 +103,7 @@ void pnv_teardown_msi_irqs(struct pci_dev *pdev)
if (WARN_ON(!phb))
return;
list_for_each_entry(entry, &pdev->msi_list, list) {
for_each_pci_msi_entry(entry, pdev) {
if (entry->irq == NO_IRQ)
continue;
irq_set_msi_desc(entry->irq, NULL);
......
......@@ -678,7 +678,8 @@ static int ps3_host_map(struct irq_domain *h, unsigned int virq,
return 0;
}
static int ps3_host_match(struct irq_domain *h, struct device_node *np)
static int ps3_host_match(struct irq_domain *h, struct device_node *np,
enum irq_domain_bus_token bus_token)
{
/* Match all */
return 1;
......
......@@ -118,7 +118,7 @@ static void rtas_teardown_msi_irqs(struct pci_dev *pdev)
{
struct msi_desc *entry;
list_for_each_entry(entry, &pdev->msi_list, list) {
for_each_pci_msi_entry(entry, pdev) {
if (entry->irq == NO_IRQ)
continue;
......@@ -350,7 +350,7 @@ static int check_msix_entries(struct pci_dev *pdev)
* So we must reject such requests. */
expected = 0;
list_for_each_entry(entry, &pdev->msi_list, list) {
for_each_pci_msi_entry(entry, pdev) {
if (entry->msi_attrib.entry_nr != expected) {
pr_debug("rtas_msi: bad MSI-X entries.\n");
return -EINVAL;
......@@ -462,7 +462,7 @@ static int rtas_setup_msi_irqs(struct pci_dev *pdev, int nvec_in, int type)
}
i = 0;
list_for_each_entry(entry, &pdev->msi_list, list) {
for_each_pci_msi_entry(entry, pdev) {
hwirq = rtas_query_irq_number(pdn, i++);
if (hwirq < 0) {
pr_debug("rtas_msi: error (%d) getting hwirq\n", rc);
......
......@@ -177,7 +177,8 @@ unsigned int ehv_pic_get_irq(void)
return irq_linear_revmap(global_ehv_pic->irqhost, irq);
}
static int ehv_pic_host_match(struct irq_domain *h, struct device_node *node)
static int ehv_pic_host_match(struct irq_domain *h, struct device_node *node,
enum irq_domain_bus_token bus_token)
{
/* Exact match, unless ehv_pic node is NULL */
return h->of_node == NULL || h->of_node == node;
......
......@@ -129,7 +129,7 @@ static void fsl_teardown_msi_irqs(struct pci_dev *pdev)
struct msi_desc *entry;
struct fsl_msi *msi_data;
list_for_each_entry(entry, &pdev->msi_list, list) {
for_each_pci_msi_entry(entry, pdev) {
if (entry->irq == NO_IRQ)
continue;
msi_data = irq_get_chip_data(entry->irq);
......@@ -219,7 +219,7 @@ static int fsl_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
}
}
list_for_each_entry(entry, &pdev->msi_list, list) {
for_each_pci_msi_entry(entry, pdev) {
/*
* Loop over all the MSI devices until we find one that has an
* available interrupt.
......
......@@ -162,7 +162,8 @@ static struct resource pic_edgectrl_iores = {
.flags = IORESOURCE_BUSY,
};
static int i8259_host_match(struct irq_domain *h, struct device_node *node)
static int i8259_host_match(struct irq_domain *h, struct device_node *node,
enum irq_domain_bus_token bus_token)
{
return h->of_node == NULL || h->of_node == node;
}
......
......@@ -671,7 +671,8 @@ static struct irq_chip ipic_edge_irq_chip = {
.irq_set_type = ipic_set_irq_type,
};
static int ipic_host_match(struct irq_domain *h, struct device_node *node)
static int ipic_host_match(struct irq_domain *h, struct device_node *node,
enum irq_domain_bus_token bus_token)
{
/* Exact match, unless ipic node is NULL */
return h->of_node == NULL || h->of_node == node;
......
......@@ -1007,7 +1007,8 @@ static struct irq_chip mpic_irq_ht_chip = {
#endif /* CONFIG_MPIC_U3_HT_IRQS */
static int mpic_host_match(struct irq_domain *h, struct device_node *node)
static int mpic_host_match(struct irq_domain *h, struct device_node *node,
enum irq_domain_bus_token bus_token)
{
/* Exact match, unless mpic node is NULL */
return h->of_node == NULL || h->of_node == node;
......
......@@ -108,7 +108,7 @@ static void u3msi_teardown_msi_irqs(struct pci_dev *pdev)
{
struct msi_desc *entry;
list_for_each_entry(entry, &pdev->msi_list, list) {
for_each_pci_msi_entry(entry, pdev) {
if (entry->irq == NO_IRQ)
continue;
......@@ -140,7 +140,7 @@ static int u3msi_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
return -ENXIO;
}
list_for_each_entry(entry, &pdev->msi_list, list) {
for_each_pci_msi_entry(entry, pdev) {
hwirq = msi_bitmap_alloc_hwirqs(&msi_mpic->msi_bitmap, 1);
if (hwirq < 0) {
pr_debug("u3msi: failed allocating hwirq\n");
......
......@@ -51,7 +51,7 @@ static int hsta_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
return -EINVAL;
}
list_for_each_entry(entry, &dev->msi_list, list) {
for_each_pci_msi_entry(entry, dev) {
irq = msi_bitmap_alloc_hwirqs(&ppc4xx_hsta_msi.bmp, 1);
if (irq < 0) {
pr_debug("%s: Failed to allocate msi interrupt\n",
......@@ -109,7 +109,7 @@ static void hsta_teardown_msi_irqs(struct pci_dev *dev)
struct msi_desc *entry;
int irq;
list_for_each_entry(entry, &dev->msi_list, list) {
for_each_pci_msi_entry(entry, dev) {
if (entry->irq == NO_IRQ)
continue;
......
......@@ -93,7 +93,7 @@ static int ppc4xx_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
if (!msi_data->msi_virqs)
return -ENOMEM;
list_for_each_entry(entry, &dev->msi_list, list) {
for_each_pci_msi_entry(entry, dev) {
int_no = msi_bitmap_alloc_hwirqs(&msi_data->bitmap, 1);
if (int_no >= 0)
break;
......@@ -127,7 +127,7 @@ void ppc4xx_teardown_msi_irqs(struct pci_dev *dev)
dev_dbg(&dev->dev, "PCIE-MSI: tearing down msi irqs\n");
list_for_each_entry(entry, &dev->msi_list, list) {
for_each_pci_msi_entry(entry, dev) {
if (entry->irq == NO_IRQ)
continue;
irq_set_msi_desc(entry->irq, NULL);
......
......@@ -244,7 +244,8 @@ static struct irq_chip qe_ic_irq_chip = {
.irq_mask_ack = qe_ic_mask_irq,
};
static int qe_ic_host_match(struct irq_domain *h, struct device_node *node)
static int qe_ic_host_match(struct irq_domain *h, struct device_node *node,
enum irq_domain_bus_token bus_token)
{
/* Exact match, unless qe_ic node is NULL */
return h->of_node == NULL || h->of_node == node;
......
......@@ -72,7 +72,7 @@ static unsigned int ics_opal_startup(struct irq_data *d)
* card, using the MSI mask bits. Firmware doesn't appear to unmask
* at that level, so we do it here by hand.
*/
if (d->msi_desc)
if (irq_data_get_msi_desc(d))
pci_msi_unmask_irq(d);
#endif
......
......@@ -75,7 +75,7 @@ static unsigned int ics_rtas_startup(struct irq_data *d)
* card, using the MSI mask bits. Firmware doesn't appear to unmask
* at that level, so we do it here by hand.
*/
if (d->msi_desc)
if (irq_data_get_msi_desc(d))
pci_msi_unmask_irq(d);
#endif
/* unmask it */
......
......@@ -298,7 +298,8 @@ int xics_get_irq_server(unsigned int virq, const struct cpumask *cpumask,
}
#endif /* CONFIG_SMP */
static int xics_host_match(struct irq_domain *h, struct device_node *node)
static int xics_host_match(struct irq_domain *h, struct device_node *node,
enum irq_domain_bus_token bus_token)
{
struct ics *ics;
......
......@@ -409,7 +409,7 @@ int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
/* Request MSI interrupts */
hwirq = 0;
list_for_each_entry(msi, &pdev->msi_list, list) {
for_each_pci_msi_entry(msi, pdev) {
rc = -EIO;
irq = irq_alloc_desc(0); /* Alloc irq on node 0 */
if (irq < 0)
......@@ -435,7 +435,7 @@ int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
return (msi_vecs == nvec) ? 0 : msi_vecs;
out_msi:
list_for_each_entry(msi, &pdev->msi_list, list) {
for_each_pci_msi_entry(msi, pdev) {
if (hwirq-- == 0)
break;
irq_set_msi_desc(msi->irq, NULL);
......@@ -465,7 +465,7 @@ void arch_teardown_msi_irqs(struct pci_dev *pdev)
return;
/* Release MSI interrupts */
list_for_each_entry(msi, &pdev->msi_list, list) {
for_each_pci_msi_entry(msi, pdev) {
if (msi->msi_attrib.is_msix)
__pci_msix_desc_mask_irq(msi, 1);
else
......
......@@ -31,7 +31,7 @@ struct irq_domain *se7343_irq_domain;
static void se7343_irq_demux(unsigned int irq, struct irq_desc *desc)
{
struct irq_data *data = irq_get_irq_data(irq);
struct irq_data *data = irq_desc_get_irq_data(desc);
struct irq_chip *chip = irq_data_get_irq_chip(data);
unsigned long mask;
int bit;
......
......@@ -30,7 +30,7 @@ struct irq_domain *se7722_irq_domain;
static void se7722_irq_demux(unsigned int irq, struct irq_desc *desc)
{
struct irq_data *data = irq_get_irq_data(irq);
struct irq_data *data = irq_desc_get_irq_data(desc);
struct irq_chip *chip = irq_data_get_irq_chip(data);
unsigned long mask;
int bit;
......
......@@ -92,8 +92,9 @@ static struct irq_chip se7724_irq_chip __read_mostly = {
.irq_unmask = enable_se7724_irq,
};
static void se7724_irq_demux(unsigned int irq, struct irq_desc *desc)
static void se7724_irq_demux(unsigned int __irq, struct irq_desc *desc)
{
unsigned int irq = irq_desc_get_irq(desc);
struct fpga_irq set = get_fpga_irq(irq);
unsigned short intv = __raw_readw(set.sraddr);
unsigned int ext_irq = set.base;
......
......@@ -62,7 +62,7 @@ static int x3proto_gpio_to_irq(struct gpio_chip *chip, unsigned gpio)
static void x3proto_gpio_irq_handler(unsigned int irq, struct irq_desc *desc)
{
struct irq_data *data = irq_get_irq_data(irq);
struct irq_data *data = irq_desc_get_irq_data(desc);
struct irq_chip *chip = irq_data_get_irq_chip(data);
unsigned long mask;
int pin;
......
......@@ -227,16 +227,17 @@ void migrate_irqs(void)
for_each_active_irq(irq) {
struct irq_data *data = irq_get_irq_data(irq);
if (data->node == cpu) {
unsigned int newcpu = cpumask_any_and(data->affinity,
if (irq_data_get_node(data) == cpu) {
struct cpumask *mask = irq_data_get_affinity_mask(data);
unsigned int newcpu = cpumask_any_and(mask,
cpu_online_mask);
if (newcpu >= nr_cpu_ids) {
pr_info_ratelimited("IRQ%u no longer affine to CPU%u\n",
irq, cpu);
cpumask_setall(data->affinity);
cpumask_setall(mask);
}
irq_set_affinity(irq, data->affinity);
irq_set_affinity(irq, mask);
}
}
}
......
......@@ -210,21 +210,21 @@ struct irq_handler_data {
static inline unsigned int irq_data_to_handle(struct irq_data *data)
{
struct irq_handler_data *ihd = data->handler_data;
struct irq_handler_data *ihd = irq_data_get_irq_handler_data(data);
return ihd->dev_handle;
}
static inline unsigned int irq_data_to_ino(struct irq_data *data)
{
struct irq_handler_data *ihd = data->handler_data;
struct irq_handler_data *ihd = irq_data_get_irq_handler_data(data);
return ihd->dev_ino;
}
static inline unsigned long irq_data_to_sysino(struct irq_data *data)
{
struct irq_handler_data *ihd = data->handler_data;
struct irq_handler_data *ihd = irq_data_get_irq_handler_data(data);
return ihd->sysino;
}
......@@ -370,13 +370,15 @@ static int irq_choose_cpu(unsigned int irq, const struct cpumask *affinity)
static void sun4u_irq_enable(struct irq_data *data)
{
struct irq_handler_data *handler_data = data->handler_data;
struct irq_handler_data *handler_data;
handler_data = irq_data_get_irq_handler_data(data);
if (likely(handler_data)) {
unsigned long cpuid, imap, val;
unsigned int tid;
cpuid = irq_choose_cpu(data->irq, data->affinity);
cpuid = irq_choose_cpu(data->irq,
irq_data_get_affinity_mask(data));
imap = handler_data->imap;
tid = sun4u_compute_tid(imap, cpuid);
......@@ -393,8 +395,9 @@ static void sun4u_irq_enable(struct irq_data *data)
static int sun4u_set_affinity(struct irq_data *data,
const struct cpumask *mask, bool force)
{
struct irq_handler_data *handler_data = data->handler_data;
struct irq_handler_data *handler_data;
handler_data = irq_data_get_irq_handler_data(data);
if (likely(handler_data)) {
unsigned long cpuid, imap, val;
unsigned int tid;
......@@ -438,15 +441,17 @@ static void sun4u_irq_disable(struct irq_data *data)
static void sun4u_irq_eoi(struct irq_data *data)
{
struct irq_handler_data *handler_data = data->handler_data;
struct irq_handler_data *handler_data;
handler_data = irq_data_get_irq_handler_data(data);
if (likely(handler_data))
upa_writeq(ICLR_IDLE, handler_data->iclr);
}
static void sun4v_irq_enable(struct irq_data *data)
{
unsigned long cpuid = irq_choose_cpu(data->irq, data->affinity);
unsigned long cpuid = irq_choose_cpu(data->irq,
irq_data_get_affinity_mask(data));
unsigned int ino = irq_data_to_sysino(data);
int err;
......@@ -508,7 +513,7 @@ static void sun4v_virq_enable(struct irq_data *data)
unsigned long cpuid;
int err;
cpuid = irq_choose_cpu(data->irq, data->affinity);
cpuid = irq_choose_cpu(data->irq, irq_data_get_affinity_mask(data));
err = sun4v_vintr_set_target(dev_handle, dev_ino, cpuid);
if (err != HV_EOK)
......@@ -881,8 +886,8 @@ void fixup_irqs(void)
if (desc->action && !irqd_is_per_cpu(data)) {
if (data->chip->irq_set_affinity)
data->chip->irq_set_affinity(data,
data->affinity,
false);
irq_data_get_affinity_mask(data),
false);
}
raw_spin_unlock_irqrestore(&desc->lock, flags);
}
......
......@@ -126,7 +126,7 @@ static int leon_set_affinity(struct irq_data *data, const struct cpumask *dest,
int oldcpu, newcpu;
mask = (unsigned long)data->chip_data;
oldcpu = irq_choose_cpu(data->affinity);
oldcpu = irq_choose_cpu(irq_data_get_affinity_mask(data));
newcpu = irq_choose_cpu(dest);
if (oldcpu == newcpu)
......@@ -149,7 +149,7 @@ static void leon_unmask_irq(struct irq_data *data)
int cpu;
mask = (unsigned long)data->chip_data;
cpu = irq_choose_cpu(data->affinity);
cpu = irq_choose_cpu(irq_data_get_affinity_mask(data));
spin_lock_irqsave(&leon_irq_lock, flags);
oldmask = LEON3_BYPASS_LOAD_PA(LEON_IMASK(cpu));
LEON3_BYPASS_STORE_PA(LEON_IMASK(cpu), (oldmask | mask));
......@@ -162,7 +162,7 @@ static void leon_mask_irq(struct irq_data *data)
int cpu;
mask = (unsigned long)data->chip_data;
cpu = irq_choose_cpu(data->affinity);
cpu = irq_choose_cpu(irq_data_get_affinity_mask(data));
spin_lock_irqsave(&leon_irq_lock, flags);
oldmask = LEON3_BYPASS_LOAD_PA(LEON_IMASK(cpu));
LEON3_BYPASS_STORE_PA(LEON_IMASK(cpu), (oldmask & ~mask));
......
......@@ -914,7 +914,7 @@ int arch_setup_msi_irq(struct pci_dev *pdev, struct msi_desc *desc)
void arch_teardown_msi_irq(unsigned int irq)
{
struct msi_desc *entry = irq_get_msi_desc(irq);
struct pci_dev *pdev = entry->dev;
struct pci_dev *pdev = msi_desc_to_pci_dev(entry);
struct pci_pbm_info *pbm = pdev->dev.archdata.host_controller;
if (pbm->teardown_msi_irq)
......
......@@ -188,7 +188,7 @@ void sun4d_handler_irq(unsigned int pil, struct pt_regs *regs)
static void sun4d_mask_irq(struct irq_data *data)
{
struct sun4d_handler_data *handler_data = data->handler_data;
struct sun4d_handler_data *handler_data = irq_data_get_irq_handler_data(data);
unsigned int real_irq;
#ifdef CONFIG_SMP
int cpuid = handler_data->cpuid;
......@@ -206,7 +206,7 @@ static void sun4d_mask_irq(struct irq_data *data)
static void sun4d_unmask_irq(struct irq_data *data)
{
struct sun4d_handler_data *handler_data = data->handler_data;
struct sun4d_handler_data *handler_data = irq_data_get_irq_handler_data(data);
unsigned int real_irq;
#ifdef CONFIG_SMP
int cpuid = handler_data->cpuid;
......
......@@ -188,9 +188,10 @@ static unsigned long sun4m_imask[0x50] = {
static void sun4m_mask_irq(struct irq_data *data)
{
struct sun4m_handler_data *handler_data = data->handler_data;
struct sun4m_handler_data *handler_data;
int cpu = smp_processor_id();
handler_data = irq_data_get_irq_handler_data(data);
if (handler_data->mask) {
unsigned long flags;
......@@ -206,9 +207,10 @@ static void sun4m_mask_irq(struct irq_data *data)
static void sun4m_unmask_irq(struct irq_data *data)
{
struct sun4m_handler_data *handler_data = data->handler_data;
struct sun4m_handler_data *handler_data;
int cpu = smp_processor_id();
handler_data = irq_data_get_irq_handler_data(data);
if (handler_data->mask) {
unsigned long flags;
......
......@@ -304,11 +304,12 @@ static struct irq_chip tilegx_legacy_irq_chip = {
* to Linux which just calls handle_level_irq() after clearing the
* MAC INTx Assert status bit associated with this interrupt.
*/
static void trio_handle_level_irq(unsigned int irq, struct irq_desc *desc)
static void trio_handle_level_irq(unsigned int __irq, struct irq_desc *desc)
{
struct pci_controller *controller = irq_desc_get_handler_data(desc);
gxio_trio_context_t *trio_context = controller->trio;
uint64_t intx = (uint64_t)irq_desc_get_chip_data(desc);
unsigned int irq = irq_desc_get_irq(desc);
int mac = controller->mac;
unsigned int reg_offset;
uint64_t level_mask;
......@@ -1442,7 +1443,7 @@ static struct pci_ops tile_cfg_ops = {
/* MSI support starts here. */
static unsigned int tilegx_msi_startup(struct irq_data *d)
{
if (d->msi_desc)
if (irq_data_get_msi_desc(d))
pci_msi_unmask_irq(d);
return 0;
......
......@@ -112,10 +112,9 @@ static struct irq_chip puv3_low_gpio_chip = {
* irq_controller_lock held, and IRQs disabled. Decode the IRQ
* and call the handler.
*/
static void
puv3_gpio_handler(unsigned int irq, struct irq_desc *desc)
static void puv3_gpio_handler(unsigned int __irq, struct irq_desc *desc)
{
unsigned int mask;
unsigned int mask, irq;
mask = readl(GPIO_GEDR);
do {
......
......@@ -179,7 +179,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
if (ret)
goto error;
i = 0;
list_for_each_entry(msidesc, &dev->msi_list, list) {
for_each_pci_msi_entry(msidesc, dev) {
irq = xen_bind_pirq_msi_to_irq(dev, msidesc, v[i],
(type == PCI_CAP_ID_MSI) ? nvec : 1,
(type == PCI_CAP_ID_MSIX) ?
......@@ -230,7 +230,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
if (type == PCI_CAP_ID_MSI && nvec > 1)
return 1;
list_for_each_entry(msidesc, &dev->msi_list, list) {
for_each_pci_msi_entry(msidesc, dev) {
__pci_read_msi_msg(msidesc, &msg);
pirq = MSI_ADDR_EXT_DEST_ID(msg.address_hi) |
((msg.address_lo >> MSI_ADDR_DEST_ID_SHIFT) & 0xff);
......@@ -274,7 +274,7 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
int ret = 0;
struct msi_desc *msidesc;
list_for_each_entry(msidesc, &dev->msi_list, list) {
for_each_pci_msi_entry(msidesc, dev) {
struct physdev_map_pirq map_irq;
domid_t domid;
......@@ -386,7 +386,7 @@ static void xen_teardown_msi_irqs(struct pci_dev *dev)
{
struct msi_desc *msidesc;
msidesc = list_entry(dev->msi_list.next, struct msi_desc, list);
msidesc = first_pci_msi_entry(dev);
if (msidesc->msi_attrib.is_msix)
xen_pci_frontend_disable_msix(dev);
else
......
......@@ -177,23 +177,25 @@ void migrate_irqs(void)
for_each_active_irq(i) {
struct irq_data *data = irq_get_irq_data(i);
struct cpumask *mask;
unsigned int newcpu;
if (irqd_is_per_cpu(data))
continue;
if (!cpumask_test_cpu(cpu, data->affinity))
mask = irq_data_get_affinity_mask(data);
if (!cpumask_test_cpu(cpu, mask))
continue;
newcpu = cpumask_any_and(data->affinity, cpu_online_mask);
newcpu = cpumask_any_and(mask, cpu_online_mask);
if (newcpu >= nr_cpu_ids) {
pr_info_ratelimited("IRQ%u no longer affine to CPU%u\n",
i, cpu);
cpumask_setall(data->affinity);
cpumask_setall(mask);
}
irq_set_affinity(i, data->affinity);
irq_set_affinity(i, mask);
}
}
#endif /* CONFIG_HOTPLUG_CPU */
......@@ -22,6 +22,7 @@ obj-$(CONFIG_REGMAP) += regmap/
obj-$(CONFIG_SOC_BUS) += soc.o
obj-$(CONFIG_PINCTRL) += pinctrl.o
obj-$(CONFIG_DEV_COREDUMP) += devcoredump.o
obj-$(CONFIG_GENERIC_MSI_IRQ_DOMAIN) += platform-msi.o
ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
......@@ -708,6 +708,9 @@ void device_initialize(struct device *dev)
INIT_LIST_HEAD(&dev->devres_head);
device_pm_init(dev);
set_dev_node(dev, -1);
#ifdef CONFIG_GENERIC_MSI_IRQ
INIT_LIST_HEAD(&dev->msi_list);
#endif
}
EXPORT_SYMBOL_GPL(device_initialize);
......
/*
* MSI framework for platform devices
*
* Copyright (C) 2015 ARM Limited, All Rights Reserved.
* Author: Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/device.h>
#include <linux/idr.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/slab.h>
#define DEV_ID_SHIFT 24
/*
* Internal data structure containing a (made up, but unique) devid
* and the callback to write the MSI message.
*/
struct platform_msi_priv_data {
irq_write_msi_msg_t write_msg;
int devid;
};
/* The devid allocator */
static DEFINE_IDA(platform_msi_devid_ida);
#ifdef GENERIC_MSI_DOMAIN_OPS
/*
* Convert an msi_desc to a globaly unique identifier (per-device
* devid + msi_desc position in the msi_list).
*/
static irq_hw_number_t platform_msi_calc_hwirq(struct msi_desc *desc)
{
u32 devid;
devid = desc->platform.msi_priv_data->devid;
return (devid << (32 - DEV_ID_SHIFT)) | desc->platform.msi_index;
}
static void platform_msi_set_desc(msi_alloc_info_t *arg, struct msi_desc *desc)
{
arg->desc = desc;
arg->hwirq = platform_msi_calc_hwirq(desc);
}
static int platform_msi_init(struct irq_domain *domain,
struct msi_domain_info *info,
unsigned int virq, irq_hw_number_t hwirq,
msi_alloc_info_t *arg)
{
struct irq_data *data;
irq_domain_set_hwirq_and_chip(domain, virq, hwirq,
info->chip, info->chip_data);
/*
* Save the MSI descriptor in handler_data so that the
* irq_write_msi_msg callback can retrieve it (and the
* associated device).
*/
data = irq_domain_get_irq_data(domain, virq);
data->handler_data = arg->desc;
return 0;
}
#else
#define platform_msi_set_desc NULL
#define platform_msi_init NULL
#endif
static void platform_msi_update_dom_ops(struct msi_domain_info *info)
{
struct msi_domain_ops *ops = info->ops;
BUG_ON(!ops);
if (ops->msi_init == NULL)
ops->msi_init = platform_msi_init;
if (ops->set_desc == NULL)
ops->set_desc = platform_msi_set_desc;
}
static void platform_msi_write_msg(struct irq_data *data, struct msi_msg *msg)
{
struct msi_desc *desc = irq_data_get_irq_handler_data(data);
struct platform_msi_priv_data *priv_data;
priv_data = desc->platform.msi_priv_data;
priv_data->write_msg(desc, msg);
}
static void platform_msi_update_chip_ops(struct msi_domain_info *info)
{
struct irq_chip *chip = info->chip;
BUG_ON(!chip);
if (!chip->irq_mask)
chip->irq_mask = irq_chip_mask_parent;
if (!chip->irq_unmask)
chip->irq_unmask = irq_chip_unmask_parent;
if (!chip->irq_eoi)
chip->irq_eoi = irq_chip_eoi_parent;
if (!chip->irq_set_affinity)
chip->irq_set_affinity = msi_domain_set_affinity;
if (!chip->irq_write_msi_msg)
chip->irq_write_msi_msg = platform_msi_write_msg;
}
static void platform_msi_free_descs(struct device *dev)
{
struct msi_desc *desc, *tmp;
list_for_each_entry_safe(desc, tmp, dev_to_msi_list(dev), list) {
list_del(&desc->list);
free_msi_entry(desc);
}
}
static int platform_msi_alloc_descs(struct device *dev, int nvec,
struct platform_msi_priv_data *data)
{
int i;
for (i = 0; i < nvec; i++) {
struct msi_desc *desc;
desc = alloc_msi_entry(dev);
if (!desc)
break;
desc->platform.msi_priv_data = data;
desc->platform.msi_index = i;
desc->nvec_used = 1;
list_add_tail(&desc->list, dev_to_msi_list(dev));
}
if (i != nvec) {
/* Clean up the mess */
platform_msi_free_descs(dev);
return -ENOMEM;
}
return 0;
}
/**
* platform_msi_create_irq_domain - Create a platform MSI interrupt domain
* @np: Optional device-tree node of the interrupt controller
* @info: MSI domain info
* @parent: Parent irq domain
*
* Updates the domain and chip ops and creates a platform MSI
* interrupt domain.
*
* Returns:
* A domain pointer or NULL in case of failure.
*/
struct irq_domain *platform_msi_create_irq_domain(struct device_node *np,
struct msi_domain_info *info,
struct irq_domain *parent)
{
struct irq_domain *domain;
if (info->flags & MSI_FLAG_USE_DEF_DOM_OPS)
platform_msi_update_dom_ops(info);
if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS)
platform_msi_update_chip_ops(info);
domain = msi_create_irq_domain(np, info, parent);
if (domain)
domain->bus_token = DOMAIN_BUS_PLATFORM_MSI;
return domain;
}
/**
* platform_msi_domain_alloc_irqs - Allocate MSI interrupts for @dev
* @dev: The device for which to allocate interrupts
* @nvec: The number of interrupts to allocate
* @write_msi_msg: Callback to write an interrupt message for @dev
*
* Returns:
* Zero for success, or an error code in case of failure
*/
int platform_msi_domain_alloc_irqs(struct device *dev, unsigned int nvec,
irq_write_msi_msg_t write_msi_msg)
{
struct platform_msi_priv_data *priv_data;
int err;
/*
* Limit the number of interrupts to 256 per device. Should we
* need to bump this up, DEV_ID_SHIFT should be adjusted
* accordingly (which would impact the max number of MSI
* capable devices).
*/
if (!dev->msi_domain || !write_msi_msg || !nvec ||
nvec > (1 << (32 - DEV_ID_SHIFT)))
return -EINVAL;
if (dev->msi_domain->bus_token != DOMAIN_BUS_PLATFORM_MSI) {
dev_err(dev, "Incompatible msi_domain, giving up\n");
return -EINVAL;
}
/* Already had a helping of MSI? Greed... */
if (!list_empty(dev_to_msi_list(dev)))
return -EBUSY;
priv_data = kzalloc(sizeof(*priv_data), GFP_KERNEL);
if (!priv_data)
return -ENOMEM;
priv_data->devid = ida_simple_get(&platform_msi_devid_ida,
0, 1 << DEV_ID_SHIFT, GFP_KERNEL);
if (priv_data->devid < 0) {
err = priv_data->devid;
goto out_free_data;
}
priv_data->write_msg = write_msi_msg;
err = platform_msi_alloc_descs(dev, nvec, priv_data);
if (err)
goto out_free_id;
err = msi_domain_alloc_irqs(dev->msi_domain, dev, nvec);
if (err)
goto out_free_desc;
return 0;
out_free_desc:
platform_msi_free_descs(dev);
out_free_id:
ida_simple_remove(&platform_msi_devid_ida, priv_data->devid);
out_free_data:
kfree(priv_data);
return err;
}
/**
* platform_msi_domain_free_irqs - Free MSI interrupts for @dev
* @dev: The device for which to free interrupts
*/
void platform_msi_domain_free_irqs(struct device *dev)
{
struct msi_desc *desc;
desc = first_msi_entry(dev);
if (desc) {
struct platform_msi_priv_data *data;
data = desc->platform.msi_priv_data;
ida_simple_remove(&platform_msi_devid_ida, data->devid);
kfree(data);
}
msi_domain_free_irqs(dev->msi_domain, dev);
platform_msi_free_descs(dev);
}
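
The file above (drivers/base/platform-msi.c) is the core of the new non-PCI MSI infrastructure mentioned in the pull request. A hedged sketch of how a client driver might consume it follows; the foo_* names, register layout and vector count are invented, while platform_msi_domain_alloc_irqs(), platform_msi_domain_free_irqs(), the write_msi_msg callback signature and desc->platform.msi_index are taken from the code above.

#include <linux/device.h>
#include <linux/io.h>
#include <linux/msi.h>

static void __iomem *foo_base;	/* assumed MMIO mapping of the device's doorbell registers */

/* Called by the MSI core whenever a vector's message must be (re)programmed. */
static void foo_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
{
	unsigned int i = desc->platform.msi_index;

	/* Hypothetical register layout: one 16-byte doorbell slot per vector. */
	writel(msg->address_lo, foo_base + 0x10 * i + 0x0);
	writel(msg->address_hi, foo_base + 0x10 * i + 0x4);
	writel(msg->data,       foo_base + 0x10 * i + 0x8);
}

static int foo_setup_msis(struct device *dev)
{
	int err;

	/* dev->msi_domain must already point at a platform MSI domain. */
	err = platform_msi_domain_alloc_irqs(dev, 4, foo_write_msi_msg);
	if (err)
		return err;

	/*
	 * The allocated descriptors now sit on dev_to_msi_list(dev); the
	 * driver walks that list and request_irq()s each desc->irq.
	 */
	return 0;
}

static void foo_teardown_msis(struct device *dev)
{
	platform_msi_domain_free_irqs(dev);
}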
......@@ -61,6 +61,10 @@ config ATMEL_AIC5_IRQ
select MULTI_IRQ_HANDLER
select SPARSE_IRQ
config I8259
bool
select IRQ_DOMAIN
config BCM7038_L1_IRQ
bool
select GENERIC_IRQ_CHIP
......@@ -177,3 +181,9 @@ config RENESAS_H8300H_INTC
config RENESAS_H8S_INTC
bool
select IRQ_DOMAIN
config IMX_GPCV2
bool
select IRQ_DOMAIN
help
Enables the wakeup IRQs for IMX platforms with GPCv2 block
obj-$(CONFIG_IRQCHIP) += irqchip.o
obj-$(CONFIG_ARCH_BCM2835) += irq-bcm2835.o
obj-$(CONFIG_ARCH_BCM2835) += irq-bcm2836.o
obj-$(CONFIG_ARCH_EXYNOS) += exynos-combiner.o
obj-$(CONFIG_ARCH_HIP04) += irq-hip04.o
obj-$(CONFIG_ARCH_MMP) += irq-mmp.o
......@@ -22,11 +23,12 @@ obj-$(CONFIG_ARCH_SPEAR3XX) += spear-shirq.o
obj-$(CONFIG_ARM_GIC) += irq-gic.o irq-gic-common.o
obj-$(CONFIG_ARM_GIC_V2M) += irq-gic-v2m.o
obj-$(CONFIG_ARM_GIC_V3) += irq-gic-v3.o irq-gic-common.o
obj-$(CONFIG_ARM_GIC_V3_ITS) += irq-gic-v3-its.o
obj-$(CONFIG_ARM_GIC_V3_ITS) += irq-gic-v3-its.o irq-gic-v3-its-pci-msi.o irq-gic-v3-its-platform-msi.o
obj-$(CONFIG_ARM_NVIC) += irq-nvic.o
obj-$(CONFIG_ARM_VIC) += irq-vic.o
obj-$(CONFIG_ATMEL_AIC_IRQ) += irq-atmel-aic-common.o irq-atmel-aic.o
obj-$(CONFIG_ATMEL_AIC5_IRQ) += irq-atmel-aic-common.o irq-atmel-aic5.o
obj-$(CONFIG_I8259) += irq-i8259.o
obj-$(CONFIG_IMGPDC_IRQ) += irq-imgpdc.o
obj-$(CONFIG_IRQ_MIPS_CPU) += irq-mips-cpu.o
obj-$(CONFIG_SIRF_IRQ) += irq-sirfsoc.o
......@@ -52,3 +54,4 @@ obj-$(CONFIG_RENESAS_H8300H_INTC) += irq-renesas-h8300h.o
obj-$(CONFIG_RENESAS_H8S_INTC) += irq-renesas-h8s.o
obj-$(CONFIG_ARCH_SA1100) += irq-sa11x0.o
obj-$(CONFIG_INGENIC_IRQ) += irq-ingenic.o
obj-$(CONFIG_IMX_GPCV2) += irq-imx-gpcv2.o
......@@ -15,13 +15,12 @@
#include <linux/slab.h>
#include <linux/syscore_ops.h>
#include <linux/irqdomain.h>
#include <linux/irqchip.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/interrupt.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include "irqchip.h"
#define COMBINER_ENABLE_SET 0x0
#define COMBINER_ENABLE_CLEAR 0x4
#define COMBINER_INT_STATUS 0xC
......@@ -66,10 +65,12 @@ static void combiner_unmask_irq(struct irq_data *data)
__raw_writel(mask, combiner_base(data) + COMBINER_ENABLE_SET);
}
static void combiner_handle_cascade_irq(unsigned int irq, struct irq_desc *desc)
static void combiner_handle_cascade_irq(unsigned int __irq,
struct irq_desc *desc)
{
struct combiner_chip_data *chip_data = irq_get_handler_data(irq);
struct irq_chip *chip = irq_get_chip(irq);
struct combiner_chip_data *chip_data = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned int irq = irq_desc_get_irq(desc);
unsigned int cascade_irq, combiner_irq;
unsigned long status;
......@@ -122,9 +123,8 @@ static struct irq_chip combiner_chip = {
static void __init combiner_cascade_irq(struct combiner_chip_data *combiner_data,
unsigned int irq)
{
if (irq_set_handler_data(irq, combiner_data) != 0)
BUG();
irq_set_chained_handler(irq, combiner_handle_cascade_irq);
irq_set_chained_handler_and_data(irq, combiner_handle_cascade_irq,
combiner_data);
}
static void __init combiner_init_one(struct combiner_chip_data *combiner_data,
......@@ -185,14 +185,14 @@ static void __init combiner_init(void __iomem *combiner_base,
combiner_data = kcalloc(max_nr, sizeof (*combiner_data), GFP_KERNEL);
if (!combiner_data) {
pr_warning("%s: could not allocate combiner data\n", __func__);
pr_warn("%s: could not allocate combiner data\n", __func__);
return;
}
combiner_irq_domain = irq_domain_add_linear(np, nr_irq,
&combiner_irq_domain_ops, combiner_data);
if (WARN_ON(!combiner_irq_domain)) {
pr_warning("%s: irq domain init failed\n", __func__);
pr_warn("%s: irq domain init failed\n", __func__);
return;
}
......
......@@ -18,6 +18,7 @@
#include <linux/init.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
#include <linux/irqchip.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/cpu.h>
#include <linux/io.h>
......@@ -33,8 +34,6 @@
#include <asm/smp_plat.h>
#include <asm/mach/irq.h>
#include "irqchip.h"
/* Interrupt Controller Registers Map */
#define ARMADA_370_XP_INT_SET_MASK_OFFS (0x48)
#define ARMADA_370_XP_INT_CLEAR_MASK_OFFS (0x4C)
......@@ -451,7 +450,7 @@ static void armada_370_xp_handle_msi_irq(struct pt_regs *r, bool b) {}
static void armada_370_xp_mpic_handle_cascade_irq(unsigned int irq,
struct irq_desc *desc)
{
struct irq_chip *chip = irq_get_chip(irq);
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned long irqmap, irqn, irqsrc, cpuid;
unsigned int cascade_irq;
......
......@@ -19,6 +19,7 @@
#include <linux/bitmap.h>
#include <linux/types.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
......@@ -31,7 +32,6 @@
#include <asm/mach/irq.h>
#include "irq-atmel-aic-common.h"
#include "irqchip.h"
/* Number of irq lines managed by AIC */
#define NR_AIC_IRQS 32
......@@ -225,7 +225,7 @@ static void __init at91sam9g45_aic_irq_fixup(struct device_node *root)
aic_common_rtt_irq_fixup(root);
}
static const struct of_device_id __initdata aic_irq_fixups[] = {
static const struct of_device_id aic_irq_fixups[] __initconst = {
{ .compatible = "atmel,at91rm9200", .data = at91rm9200_aic_irq_fixup },
{ .compatible = "atmel,at91sam9g45", .data = at91sam9g45_aic_irq_fixup },
{ .compatible = "atmel,at91sam9n12", .data = at91rm9200_aic_irq_fixup },
......
......@@ -19,6 +19,7 @@
#include <linux/bitmap.h>
#include <linux/types.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
......@@ -31,7 +32,6 @@
#include <asm/mach/irq.h>
#include "irq-atmel-aic-common.h"
#include "irqchip.h"
/* Number of irq lines managed by AIC */
#define NR_AIC5_IRQS 128
......@@ -290,7 +290,7 @@ static void __init sama5d3_aic_irq_fixup(struct device_node *root)
aic_common_rtc_irq_fixup(root);
}
static const struct of_device_id __initdata aic5_irq_fixups[] = {
static const struct of_device_id aic5_irq_fixups[] __initconst = {
{ .compatible = "atmel,sama5d3", .data = sama5d3_aic_irq_fixup },
{ .compatible = "atmel,sama5d4", .data = sama5d3_aic_irq_fixup },
{ /* sentinel */ },
......
......@@ -48,13 +48,12 @@
#include <linux/slab.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/irqchip.h>
#include <linux/irqdomain.h>
#include <asm/exception.h>
#include <asm/mach/irq.h>
#include "irqchip.h"
/* Put the bank and irq (32 bits) into the hwirq */
#define MAKE_HWIRQ(b, n) ((b << 5) | (n))
#define HWIRQ_BANK(i) (i >> 5)
......@@ -76,10 +75,10 @@
#define NR_BANKS 3
#define IRQS_PER_BANK 32
static int reg_pending[] __initconst = { 0x00, 0x04, 0x08 };
static int reg_enable[] __initconst = { 0x18, 0x10, 0x14 };
static int reg_disable[] __initconst = { 0x24, 0x1c, 0x20 };
static int bank_irqs[] __initconst = { 8, 32, 32 };
static const int reg_pending[] __initconst = { 0x00, 0x04, 0x08 };
static const int reg_enable[] __initconst = { 0x18, 0x10, 0x14 };
static const int reg_disable[] __initconst = { 0x24, 0x1c, 0x20 };
static const int bank_irqs[] __initconst = { 8, 32, 32 };
static const int shortcuts[] = {
7, 9, 10, 18, 19, /* Bank 1 */
......@@ -97,6 +96,7 @@ struct armctrl_ic {
static struct armctrl_ic intc __read_mostly;
static void __exception_irq_entry bcm2835_handle_irq(
struct pt_regs *regs);
static void bcm2836_chained_handle_irq(unsigned int irq, struct irq_desc *desc);
static void armctrl_mask_irq(struct irq_data *d)
{
......@@ -140,7 +140,8 @@ static const struct irq_domain_ops armctrl_ops = {
};
static int __init armctrl_of_init(struct device_node *node,
struct device_node *parent)
struct device_node *parent,
bool is_2836)
{
void __iomem *base;
int irq, b, i;
......@@ -169,54 +170,90 @@ static int __init armctrl_of_init(struct device_node *node,
}
}
set_handle_irq(bcm2835_handle_irq);
if (is_2836) {
int parent_irq = irq_of_parse_and_map(node, 0);
if (!parent_irq) {
panic("%s: unable to get parent interrupt.\n",
node->full_name);
}
irq_set_chained_handler(parent_irq, bcm2836_chained_handle_irq);
} else {
set_handle_irq(bcm2835_handle_irq);
}
return 0;
}
static int __init bcm2835_armctrl_of_init(struct device_node *node,
struct device_node *parent)
{
return armctrl_of_init(node, parent, false);
}
static int __init bcm2836_armctrl_of_init(struct device_node *node,
struct device_node *parent)
{
return armctrl_of_init(node, parent, true);
}
/*
* Handle each interrupt across the entire interrupt controller. This reads the
* status register before handling each interrupt, which is necessary given that
* handle_IRQ may briefly re-enable interrupts for soft IRQ handling.
*/
static void armctrl_handle_bank(int bank, struct pt_regs *regs)
static u32 armctrl_translate_bank(int bank)
{
u32 stat, irq;
u32 stat = readl_relaxed(intc.pending[bank]);
while ((stat = readl_relaxed(intc.pending[bank]))) {
irq = MAKE_HWIRQ(bank, ffs(stat) - 1);
handle_IRQ(irq_linear_revmap(intc.domain, irq), regs);
}
return MAKE_HWIRQ(bank, ffs(stat) - 1);
}
static u32 armctrl_translate_shortcut(int bank, u32 stat)
{
return MAKE_HWIRQ(bank, shortcuts[ffs(stat >> SHORTCUT_SHIFT) - 1]);
}
static void armctrl_handle_shortcut(int bank, struct pt_regs *regs,
u32 stat)
static u32 get_next_armctrl_hwirq(void)
{
u32 irq = MAKE_HWIRQ(bank, shortcuts[ffs(stat >> SHORTCUT_SHIFT) - 1]);
handle_IRQ(irq_linear_revmap(intc.domain, irq), regs);
u32 stat = readl_relaxed(intc.pending[0]) & BANK0_VALID_MASK;
if (stat == 0)
return ~0;
else if (stat & BANK0_HWIRQ_MASK)
return MAKE_HWIRQ(0, ffs(stat & BANK0_HWIRQ_MASK) - 1);
else if (stat & SHORTCUT1_MASK)
return armctrl_translate_shortcut(1, stat & SHORTCUT1_MASK);
else if (stat & SHORTCUT2_MASK)
return armctrl_translate_shortcut(2, stat & SHORTCUT2_MASK);
else if (stat & BANK1_HWIRQ)
return armctrl_translate_bank(1);
else if (stat & BANK2_HWIRQ)
return armctrl_translate_bank(2);
else
BUG();
}
static void __exception_irq_entry bcm2835_handle_irq(
struct pt_regs *regs)
{
u32 stat, irq;
while ((stat = readl_relaxed(intc.pending[0]) & BANK0_VALID_MASK)) {
if (stat & BANK0_HWIRQ_MASK) {
irq = MAKE_HWIRQ(0, ffs(stat & BANK0_HWIRQ_MASK) - 1);
handle_IRQ(irq_linear_revmap(intc.domain, irq), regs);
} else if (stat & SHORTCUT1_MASK) {
armctrl_handle_shortcut(1, regs, stat & SHORTCUT1_MASK);
} else if (stat & SHORTCUT2_MASK) {
armctrl_handle_shortcut(2, regs, stat & SHORTCUT2_MASK);
} else if (stat & BANK1_HWIRQ) {
armctrl_handle_bank(1, regs);
} else if (stat & BANK2_HWIRQ) {
armctrl_handle_bank(2, regs);
} else {
BUG();
}
}
u32 hwirq;
while ((hwirq = get_next_armctrl_hwirq()) != ~0)
handle_IRQ(irq_linear_revmap(intc.domain, hwirq), regs);
}
static void bcm2836_chained_handle_irq(unsigned int irq, struct irq_desc *desc)
{
u32 hwirq;
while ((hwirq = get_next_armctrl_hwirq()) != ~0)
generic_handle_irq(irq_linear_revmap(intc.domain, hwirq));
}
IRQCHIP_DECLARE(bcm2835_armctrl_ic, "brcm,bcm2835-armctrl-ic", armctrl_of_init);
IRQCHIP_DECLARE(bcm2835_armctrl_ic, "brcm,bcm2835-armctrl-ic",
bcm2835_armctrl_of_init);
IRQCHIP_DECLARE(bcm2836_armctrl_ic, "brcm,bcm2836-armctrl-ic",
bcm2836_armctrl_of_init);
/*
* Root interrupt controller for the BCM2836 (Raspberry Pi 2).
*
* Copyright 2015 Broadcom
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/cpu.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/irqchip.h>
#include <linux/irqdomain.h>
#include <asm/exception.h>
/*
* The low 2 bits identify the CPU that the GPU IRQ goes to, and the
* next 2 bits identify the CPU that the GPU FIQ goes to.
*/
#define LOCAL_GPU_ROUTING 0x00c
/* When setting bits 0-3, enables PMU interrupts on that CPU. */
#define LOCAL_PM_ROUTING_SET 0x010
/* When setting bits 0-3, disables PMU interrupts on that CPU. */
#define LOCAL_PM_ROUTING_CLR 0x014
/*
* The low 4 bits of this are the CPU's timer IRQ enables, and the
* next 4 bits are the CPU's timer FIQ enables (which override the IRQ
* bits).
*/
#define LOCAL_TIMER_INT_CONTROL0 0x040
/*
* The low 4 bits of this are the CPU's per-mailbox IRQ enables, and
* the next 4 bits are the CPU's per-mailbox FIQ enables (which
* override the IRQ bits).
*/
#define LOCAL_MAILBOX_INT_CONTROL0 0x050
/*
* The CPU's interrupt status register. Bits are defined by the
* LOCAL_IRQ_* bits below.
*/
#define LOCAL_IRQ_PENDING0 0x060
/* Same status bits as above, but for FIQ. */
#define LOCAL_FIQ_PENDING0 0x070
/*
* Mailbox0 write-to-set bits. There are 16 mailboxes, 4 per CPU, and
* these bits are organized by mailbox number and then CPU number. We
* use mailbox 0 for IPIs. The mailbox's interrupt is raised while
* any bit is set.
*/
#define LOCAL_MAILBOX0_SET0 0x080
/* Mailbox0 write-to-clear bits. */
#define LOCAL_MAILBOX0_CLR0 0x0c0
#define LOCAL_IRQ_CNTPSIRQ 0
#define LOCAL_IRQ_CNTPNSIRQ 1
#define LOCAL_IRQ_CNTHPIRQ 2
#define LOCAL_IRQ_CNTVIRQ 3
#define LOCAL_IRQ_MAILBOX0 4
#define LOCAL_IRQ_MAILBOX1 5
#define LOCAL_IRQ_MAILBOX2 6
#define LOCAL_IRQ_MAILBOX3 7
#define LOCAL_IRQ_GPU_FAST 8
#define LOCAL_IRQ_PMU_FAST 9
#define LAST_IRQ LOCAL_IRQ_PMU_FAST
struct bcm2836_arm_irqchip_intc {
struct irq_domain *domain;
void __iomem *base;
};
static struct bcm2836_arm_irqchip_intc intc __read_mostly;
static void bcm2836_arm_irqchip_mask_per_cpu_irq(unsigned int reg_offset,
unsigned int bit,
int cpu)
{
void __iomem *reg = intc.base + reg_offset + 4 * cpu;
writel(readl(reg) & ~BIT(bit), reg);
}
static void bcm2836_arm_irqchip_unmask_per_cpu_irq(unsigned int reg_offset,
unsigned int bit,
int cpu)
{
void __iomem *reg = intc.base + reg_offset + 4 * cpu;
writel(readl(reg) | BIT(bit), reg);
}
static void bcm2836_arm_irqchip_mask_timer_irq(struct irq_data *d)
{
bcm2836_arm_irqchip_mask_per_cpu_irq(LOCAL_TIMER_INT_CONTROL0,
d->hwirq - LOCAL_IRQ_CNTPSIRQ,
smp_processor_id());
}
static void bcm2836_arm_irqchip_unmask_timer_irq(struct irq_data *d)
{
bcm2836_arm_irqchip_unmask_per_cpu_irq(LOCAL_TIMER_INT_CONTROL0,
d->hwirq - LOCAL_IRQ_CNTPSIRQ,
smp_processor_id());
}
static struct irq_chip bcm2836_arm_irqchip_timer = {
.name = "bcm2836-timer",
.irq_mask = bcm2836_arm_irqchip_mask_timer_irq,
.irq_unmask = bcm2836_arm_irqchip_unmask_timer_irq,
};
static void bcm2836_arm_irqchip_mask_pmu_irq(struct irq_data *d)
{
writel(1 << smp_processor_id(), intc.base + LOCAL_PM_ROUTING_CLR);
}
static void bcm2836_arm_irqchip_unmask_pmu_irq(struct irq_data *d)
{
writel(1 << smp_processor_id(), intc.base + LOCAL_PM_ROUTING_SET);
}
static struct irq_chip bcm2836_arm_irqchip_pmu = {
.name = "bcm2836-pmu",
.irq_mask = bcm2836_arm_irqchip_mask_pmu_irq,
.irq_unmask = bcm2836_arm_irqchip_unmask_pmu_irq,
};
static void bcm2836_arm_irqchip_mask_gpu_irq(struct irq_data *d)
{
}
static void bcm2836_arm_irqchip_unmask_gpu_irq(struct irq_data *d)
{
}
static struct irq_chip bcm2836_arm_irqchip_gpu = {
.name = "bcm2836-gpu",
.irq_mask = bcm2836_arm_irqchip_mask_gpu_irq,
.irq_unmask = bcm2836_arm_irqchip_unmask_gpu_irq,
};
static void bcm2836_arm_irqchip_register_irq(int hwirq, struct irq_chip *chip)
{
int irq = irq_create_mapping(intc.domain, hwirq);
irq_set_percpu_devid(irq);
irq_set_chip_and_handler(irq, chip, handle_percpu_devid_irq);
irq_set_status_flags(irq, IRQ_NOAUTOEN);
}
static void
__exception_irq_entry bcm2836_arm_irqchip_handle_irq(struct pt_regs *regs)
{
int cpu = smp_processor_id();
u32 stat;
stat = readl_relaxed(intc.base + LOCAL_IRQ_PENDING0 + 4 * cpu);
if (stat & 0x10) {
#ifdef CONFIG_SMP
void __iomem *mailbox0 = (intc.base +
LOCAL_MAILBOX0_CLR0 + 16 * cpu);
u32 mbox_val = readl(mailbox0);
u32 ipi = ffs(mbox_val) - 1;
writel(1 << ipi, mailbox0);
handle_IPI(ipi, regs);
#endif
} else {
u32 hwirq = ffs(stat) - 1;
handle_IRQ(irq_linear_revmap(intc.domain, hwirq), regs);
}
}
#ifdef CONFIG_SMP
static void bcm2836_arm_irqchip_send_ipi(const struct cpumask *mask,
unsigned int ipi)
{
int cpu;
void __iomem *mailbox0_base = intc.base + LOCAL_MAILBOX0_SET0;
/*
* Ensure that stores to normal memory are visible to the
* other CPUs before issuing the IPI.
*/
dsb();
for_each_cpu(cpu, mask) {
writel(1 << ipi, mailbox0_base + 16 * cpu);
}
}
/* Unmasks the IPI on the CPU when it's online. */
static int bcm2836_arm_irqchip_cpu_notify(struct notifier_block *nfb,
unsigned long action, void *hcpu)
{
unsigned int cpu = (unsigned long)hcpu;
unsigned int int_reg = LOCAL_MAILBOX_INT_CONTROL0;
unsigned int mailbox = 0;
if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
bcm2836_arm_irqchip_unmask_per_cpu_irq(int_reg, mailbox, cpu);
else if (action == CPU_DYING)
bcm2836_arm_irqchip_mask_per_cpu_irq(int_reg, mailbox, cpu);
return NOTIFY_OK;
}
static struct notifier_block bcm2836_arm_irqchip_cpu_notifier = {
.notifier_call = bcm2836_arm_irqchip_cpu_notify,
.priority = 100,
};
#endif
static const struct irq_domain_ops bcm2836_arm_irqchip_intc_ops = {
.xlate = irq_domain_xlate_onecell
};
static void
bcm2836_arm_irqchip_smp_init(void)
{
#ifdef CONFIG_SMP
/* Unmask IPIs to the boot CPU. */
bcm2836_arm_irqchip_cpu_notify(&bcm2836_arm_irqchip_cpu_notifier,
CPU_STARTING,
(void *)smp_processor_id());
register_cpu_notifier(&bcm2836_arm_irqchip_cpu_notifier);
set_smp_cross_call(bcm2836_arm_irqchip_send_ipi);
#endif
}
static int __init bcm2836_arm_irqchip_l1_intc_of_init(struct device_node *node,
struct device_node *parent)
{
intc.base = of_iomap(node, 0);
if (!intc.base) {
panic("%s: unable to map local interrupt registers\n",
node->full_name);
}
intc.domain = irq_domain_add_linear(node, LAST_IRQ + 1,
&bcm2836_arm_irqchip_intc_ops,
NULL);
if (!intc.domain)
panic("%s: unable to create IRQ domain\n", node->full_name);
bcm2836_arm_irqchip_register_irq(LOCAL_IRQ_CNTPSIRQ,
&bcm2836_arm_irqchip_timer);
bcm2836_arm_irqchip_register_irq(LOCAL_IRQ_CNTPNSIRQ,
&bcm2836_arm_irqchip_timer);
bcm2836_arm_irqchip_register_irq(LOCAL_IRQ_CNTHPIRQ,
&bcm2836_arm_irqchip_timer);
bcm2836_arm_irqchip_register_irq(LOCAL_IRQ_CNTVIRQ,
&bcm2836_arm_irqchip_timer);
bcm2836_arm_irqchip_register_irq(LOCAL_IRQ_GPU_FAST,
&bcm2836_arm_irqchip_gpu);
bcm2836_arm_irqchip_register_irq(LOCAL_IRQ_PMU_FAST,
&bcm2836_arm_irqchip_pmu);
bcm2836_arm_irqchip_smp_init();
set_handle_irq(bcm2836_arm_irqchip_handle_irq);
return 0;
}
IRQCHIP_DECLARE(bcm2836_arm_irqchip_l1_intc, "brcm,bcm2836-l1-intc",
bcm2836_arm_irqchip_l1_intc_of_init);
......@@ -29,10 +29,9 @@
#include <linux/slab.h>
#include <linux/smp.h>
#include <linux/types.h>
#include <linux/irqchip.h>
#include <linux/irqchip/chained_irq.h>
#include "irqchip.h"
#define IRQS_PER_WORD 32
#define REG_BYTES_PER_IRQ_WORD (sizeof(u32) * 4)
#define MAX_WORDS 8
......@@ -257,8 +256,8 @@ static int __init bcm7038_l1_init_one(struct device_node *dn,
pr_err("failed to map parent interrupt %d\n", parent_irq);
return -EINVAL;
}
irq_set_handler_data(parent_irq, intc);
irq_set_chained_handler(parent_irq, bcm7038_l1_irq_handle);
irq_set_chained_handler_and_data(parent_irq, bcm7038_l1_irq_handle,
intc);
return 0;
}
......
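Several hunks in this pull replace the old two-step chained-handler install with the combined helper, as seen just above for bcm7038_l1. A minimal sketch of the before/after shape, using a hypothetical foo_intc driver (the driver name, struct layout and handler are illustrative only, not taken from any real driver):

#include <linux/io.h>
#include <linux/irq.h>

/* Hypothetical per-controller state, for illustration only. */
struct foo_intc {
        void __iomem *base;
};

static void foo_intc_cascade(unsigned int irq, struct irq_desc *desc)
{
        struct foo_intc *data = irq_desc_get_handler_data(desc);

        /* ... read data->base and demultiplex the child interrupts ... */
}

static void foo_intc_setup(struct foo_intc *data, unsigned int parent_irq)
{
        /*
         * Previously two calls, with a window where the handler could
         * run before its data was installed:
         *
         *      irq_set_handler_data(parent_irq, data);
         *      irq_set_chained_handler(parent_irq, foo_intc_cascade);
         *
         * Now one call installs handler and handler data together:
         */
        irq_set_chained_handler_and_data(parent_irq, foo_intc_cascade, data);
}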
......@@ -26,10 +26,9 @@
#include <linux/irqdomain.h>
#include <linux/reboot.h>
#include <linux/bitops.h>
#include <linux/irqchip.h>
#include <linux/irqchip/chained_irq.h>
#include "irqchip.h"
/* Register offset in the L2 interrupt controller */
#define IRQEN 0x00
#define IRQSTAT 0x04
......@@ -38,6 +37,11 @@
#define MAX_MAPPINGS (MAX_WORDS * 2)
#define IRQS_PER_WORD 32
struct bcm7120_l1_intc_data {
struct bcm7120_l2_intc_data *b;
u32 irq_map_mask[MAX_WORDS];
};
struct bcm7120_l2_intc_data {
unsigned int n_words;
void __iomem *map_base[MAX_MAPPINGS];
......@@ -47,14 +51,15 @@ struct bcm7120_l2_intc_data {
struct irq_domain *domain;
bool can_wake;
u32 irq_fwd_mask[MAX_WORDS];
u32 irq_map_mask[MAX_WORDS];
struct bcm7120_l1_intc_data *l1_data;
int num_parent_irqs;
const __be32 *map_mask_prop;
};
static void bcm7120_l2_intc_irq_handle(unsigned int irq, struct irq_desc *desc)
{
struct bcm7120_l2_intc_data *b = irq_desc_get_handler_data(desc);
struct bcm7120_l1_intc_data *data = irq_desc_get_handler_data(desc);
struct bcm7120_l2_intc_data *b = data->b;
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned int idx;
......@@ -69,7 +74,8 @@ static void bcm7120_l2_intc_irq_handle(unsigned int irq, struct irq_desc *desc)
irq_gc_lock(gc);
pending = irq_reg_readl(gc, b->stat_offset[idx]) &
gc->mask_cache;
gc->mask_cache &
data->irq_map_mask[idx];
irq_gc_unlock(gc);
for_each_set_bit(hwirq, &pending, IRQS_PER_WORD) {
......@@ -81,11 +87,10 @@ static void bcm7120_l2_intc_irq_handle(unsigned int irq, struct irq_desc *desc)
chained_irq_exit(chip, desc);
}
static void bcm7120_l2_intc_suspend(struct irq_data *d)
static void bcm7120_l2_intc_suspend(struct irq_chip_generic *gc)
{
struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
struct irq_chip_type *ct = irq_data_get_chip_type(d);
struct bcm7120_l2_intc_data *b = gc->private;
struct irq_chip_type *ct = gc->chip_types;
irq_gc_lock(gc);
if (b->can_wake)
......@@ -94,10 +99,9 @@ static void bcm7120_l2_intc_suspend(struct irq_data *d)
irq_gc_unlock(gc);
}
static void bcm7120_l2_intc_resume(struct irq_data *d)
static void bcm7120_l2_intc_resume(struct irq_chip_generic *gc)
{
struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
struct irq_chip_type *ct = irq_data_get_chip_type(d);
struct irq_chip_type *ct = gc->chip_types;
/* Restore the saved mask */
irq_gc_lock(gc);
......@@ -107,8 +111,9 @@ static void bcm7120_l2_intc_resume(struct irq_data *d)
static int bcm7120_l2_intc_init_one(struct device_node *dn,
struct bcm7120_l2_intc_data *data,
int irq)
int irq, u32 *valid_mask)
{
struct bcm7120_l1_intc_data *l1_data = &data->l1_data[irq];
int parent_irq;
unsigned int idx;
......@@ -120,20 +125,28 @@ static int bcm7120_l2_intc_init_one(struct device_node *dn,
/* For multiple parent IRQs with multiple words, this looks like:
* <irq0_w0 irq0_w1 irq1_w0 irq1_w1 ...>
*
* We need to associate a given parent interrupt with its corresponding
* map_mask in order to mask the status register with it because we
* have the same handler being called for multiple parent interrupts.
*
* This is typically something needed on BCM7xxx (STB chips).
*/
for (idx = 0; idx < data->n_words; idx++) {
if (data->map_mask_prop) {
data->irq_map_mask[idx] |=
l1_data->irq_map_mask[idx] |=
be32_to_cpup(data->map_mask_prop +
irq * data->n_words + idx);
} else {
data->irq_map_mask[idx] = 0xffffffff;
l1_data->irq_map_mask[idx] = 0xffffffff;
}
valid_mask[idx] |= l1_data->irq_map_mask[idx];
}
irq_set_handler_data(parent_irq, data);
irq_set_chained_handler(parent_irq, bcm7120_l2_intc_irq_handle);
l1_data->b = data;
irq_set_chained_handler_and_data(parent_irq,
bcm7120_l2_intc_irq_handle, l1_data);
return 0;
}
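The comment above describes how the flattened map-mask property is laid out per parent interrupt and per 32-bit word. As a worked example of that indexing (the function and parameter names are hypothetical, but the expression mirrors the be32_to_cpup() lookup in bcm7120_l2_intc_init_one()):

#include <linux/kernel.h>
#include <linux/of.h>

/*
 * Illustration only: for two parents and two words the property carries
 * four cells, in this order:
 *
 *      mask[parent0][word0], mask[parent0][word1],
 *      mask[parent1][word0], mask[parent1][word1]
 */
static u32 example_map_mask(const __be32 *prop, unsigned int n_words,
                            unsigned int parent, unsigned int word)
{
        return be32_to_cpup(prop + parent * n_words + word);
}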
......@@ -214,6 +227,7 @@ int __init bcm7120_l2_intc_probe(struct device_node *dn,
struct irq_chip_type *ct;
int ret = 0;
unsigned int idx, irq, flags;
u32 valid_mask[MAX_WORDS] = { };
data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data)
......@@ -226,9 +240,16 @@ int __init bcm7120_l2_intc_probe(struct device_node *dn,
goto out_unmap;
}
data->l1_data = kcalloc(data->num_parent_irqs, sizeof(*data->l1_data),
GFP_KERNEL);
if (!data->l1_data) {
ret = -ENOMEM;
goto out_free_l1_data;
}
ret = iomap_regs_fn(dn, data);
if (ret < 0)
goto out_unmap;
goto out_free_l1_data;
for (idx = 0; idx < data->n_words; idx++) {
__raw_writel(data->irq_fwd_mask[idx],
......@@ -237,16 +258,16 @@ int __init bcm7120_l2_intc_probe(struct device_node *dn,
}
for (irq = 0; irq < data->num_parent_irqs; irq++) {
ret = bcm7120_l2_intc_init_one(dn, data, irq);
ret = bcm7120_l2_intc_init_one(dn, data, irq, valid_mask);
if (ret)
goto out_unmap;
goto out_free_l1_data;
}
data->domain = irq_domain_add_linear(dn, IRQS_PER_WORD * data->n_words,
&irq_generic_chip_ops, NULL);
if (!data->domain) {
ret = -ENOMEM;
goto out_unmap;
goto out_free_l1_data;
}
/* MIPS chips strapped for BE will automagically configure the
......@@ -270,7 +291,7 @@ int __init bcm7120_l2_intc_probe(struct device_node *dn,
irq = idx * IRQS_PER_WORD;
gc = irq_get_domain_generic_chip(data->domain, irq);
gc->unused = 0xffffffff & ~data->irq_map_mask[idx];
gc->unused = 0xffffffff & ~valid_mask[idx];
gc->private = data;
ct = gc->chip_types;
......@@ -280,8 +301,15 @@ int __init bcm7120_l2_intc_probe(struct device_node *dn,
ct->chip.irq_mask = irq_gc_mask_clr_bit;
ct->chip.irq_unmask = irq_gc_mask_set_bit;
ct->chip.irq_ack = irq_gc_noop;
ct->chip.irq_suspend = bcm7120_l2_intc_suspend;
ct->chip.irq_resume = bcm7120_l2_intc_resume;
gc->suspend = bcm7120_l2_intc_suspend;
gc->resume = bcm7120_l2_intc_resume;
/*
* Initialize mask-cache, in case we need it for
* saving/restoring fwd mask even w/o any child interrupts
* installed
*/
gc->mask_cache = irq_reg_readl(gc, ct->regs.mask);
if (data->can_wake) {
/* This IRQ chip can wake the system, set all
......@@ -300,6 +328,8 @@ int __init bcm7120_l2_intc_probe(struct device_node *dn,
out_free_domain:
irq_domain_remove(data->domain);
out_free_l1_data:
kfree(data->l1_data);
out_unmap:
for (idx = 0; idx < MAX_MAPPINGS; idx++) {
if (data->map_base[idx])
......
......@@ -32,8 +32,6 @@
#include <linux/irqchip.h>
#include <linux/irqchip/chained_irq.h>
#include "irqchip.h"
/* Register offsets in the L2 interrupt controller */
#define CPU_STATUS 0x00
#define CPU_SET 0x04
......@@ -51,11 +49,13 @@ struct brcmstb_l2_intc_data {
u32 saved_mask; /* for suspend/resume */
};
static void brcmstb_l2_intc_irq_handle(unsigned int irq, struct irq_desc *desc)
static void brcmstb_l2_intc_irq_handle(unsigned int __irq,
struct irq_desc *desc)
{
struct brcmstb_l2_intc_data *b = irq_desc_get_handler_data(desc);
struct irq_chip_generic *gc = irq_get_domain_generic_chip(b->domain, 0);
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned int irq = irq_desc_get_irq(desc);
u32 status;
chained_irq_enter(chip, desc);
......@@ -172,8 +172,8 @@ int __init brcmstb_l2_intc_of_init(struct device_node *np,
}
/* Set the IRQ chaining logic */
irq_set_handler_data(data->parent_irq, data);
irq_set_chained_handler(data->parent_irq, brcmstb_l2_intc_irq_handle);
irq_set_chained_handler_and_data(data->parent_irq,
brcmstb_l2_intc_irq_handle, data);
gc = irq_get_domain_generic_chip(data->domain, 0);
gc->reg_base = data->base;
......
......@@ -11,6 +11,7 @@
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/irqdomain.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
......@@ -19,8 +20,6 @@
#include <asm/exception.h>
#include <asm/mach/irq.h>
#include "irqchip.h"
#define CLPS711X_INTSR1 (0x0240)
#define CLPS711X_INTMR1 (0x0280)
#define CLPS711X_BLEOI (0x0600)
......
......@@ -11,13 +11,12 @@
*/
#include <linux/err.h>
#include <linux/io.h>
#include <linux/irqchip.h>
#include <linux/irqdomain.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/slab.h>
#include "irqchip.h"
#define IRQ_FREE -1
#define IRQ_RESERVED -2
#define IRQ_SKIP -3
......
......@@ -12,6 +12,7 @@
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
......@@ -20,8 +21,6 @@
#include <asm/exception.h>
#include "irqchip.h"
#define UC_IRQ_CONTROL 0x04
#define IC_FLAG_CLEAR_LO 0x00
......
......@@ -13,36 +13,36 @@
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include "irqchip.h"
#define APB_INT_ENABLE_L 0x00
#define APB_INT_ENABLE_H 0x04
#define APB_INT_MASK_L 0x08
#define APB_INT_MASK_H 0x0c
#define APB_INT_FINALSTATUS_L 0x30
#define APB_INT_FINALSTATUS_H 0x34
#define APB_INT_BASE_OFFSET 0x04
static void dw_apb_ictl_handler(unsigned int irq, struct irq_desc *desc)
{
struct irq_chip *chip = irq_get_chip(irq);
struct irq_chip_generic *gc = irq_get_handler_data(irq);
struct irq_domain *d = gc->private;
u32 stat;
struct irq_domain *d = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
int n;
chained_irq_enter(chip, desc);
for (n = 0; n < gc->num_ct; n++) {
stat = readl_relaxed(gc->reg_base +
APB_INT_FINALSTATUS_L + 4 * n);
for (n = 0; n < d->revmap_size; n += 32) {
struct irq_chip_generic *gc = irq_get_domain_generic_chip(d, n);
u32 stat = readl_relaxed(gc->reg_base + APB_INT_FINALSTATUS_L);
while (stat) {
u32 hwirq = ffs(stat) - 1;
generic_handle_irq(irq_find_mapping(d,
gc->irq_base + hwirq + 32 * n));
u32 virq = irq_find_mapping(d, gc->irq_base + hwirq);
generic_handle_irq(virq);
stat &= ~(1 << hwirq);
}
}
......@@ -73,7 +73,7 @@ static int __init dw_apb_ictl_init(struct device_node *np,
struct irq_domain *domain;
struct irq_chip_generic *gc;
void __iomem *iobase;
int ret, nrirqs, irq;
int ret, nrirqs, irq, i;
u32 reg;
/* Map the parent interrupt for the chained handler */
......@@ -128,35 +128,25 @@ static int __init dw_apb_ictl_init(struct device_node *np,
goto err_unmap;
}
ret = irq_alloc_domain_generic_chips(domain, 32, (nrirqs > 32) ? 2 : 1,
np->name, handle_level_irq, clr, 0,
IRQ_GC_MASK_CACHE_PER_TYPE |
ret = irq_alloc_domain_generic_chips(domain, 32, 1, np->name,
handle_level_irq, clr, 0,
IRQ_GC_INIT_MASK_CACHE);
if (ret) {
pr_err("%s: unable to alloc irq domain gc\n", np->full_name);
goto err_unmap;
}
gc = irq_get_domain_generic_chip(domain, 0);
gc->private = domain;
gc->reg_base = iobase;
gc->chip_types[0].regs.mask = APB_INT_MASK_L;
gc->chip_types[0].regs.enable = APB_INT_ENABLE_L;
gc->chip_types[0].chip.irq_mask = irq_gc_mask_set_bit;
gc->chip_types[0].chip.irq_unmask = irq_gc_mask_clr_bit;
gc->chip_types[0].chip.irq_resume = dw_apb_ictl_resume;
if (nrirqs > 32) {
gc->chip_types[1].regs.mask = APB_INT_MASK_H;
gc->chip_types[1].regs.enable = APB_INT_ENABLE_H;
gc->chip_types[1].chip.irq_mask = irq_gc_mask_set_bit;
gc->chip_types[1].chip.irq_unmask = irq_gc_mask_clr_bit;
gc->chip_types[1].chip.irq_resume = dw_apb_ictl_resume;
for (i = 0; i < DIV_ROUND_UP(nrirqs, 32); i++) {
gc = irq_get_domain_generic_chip(domain, i * 32);
gc->reg_base = iobase + i * APB_INT_BASE_OFFSET;
gc->chip_types[0].regs.mask = APB_INT_MASK_L;
gc->chip_types[0].regs.enable = APB_INT_ENABLE_L;
gc->chip_types[0].chip.irq_mask = irq_gc_mask_set_bit;
gc->chip_types[0].chip.irq_unmask = irq_gc_mask_clr_bit;
gc->chip_types[0].chip.irq_resume = dw_apb_ictl_resume;
}
irq_set_handler_data(irq, gc);
irq_set_chained_handler(irq, dw_apb_ictl_handler);
irq_set_chained_handler_and_data(irq, dw_apb_ictl_handler, domain);
return 0;
......
......@@ -45,13 +45,11 @@
struct v2m_data {
spinlock_t msi_cnt_lock;
struct msi_controller mchip;
struct resource res; /* GICv2m resource */
void __iomem *base; /* GICv2m virt address */
u32 spi_start; /* The SPI number that MSIs start */
u32 nr_spis; /* The number of SPIs for MSIs */
unsigned long *bm; /* MSI vector bitmap */
struct irq_domain *domain;
};
static void gicv2m_mask_msi_irq(struct irq_data *d)
......@@ -213,11 +211,25 @@ static bool is_msi_spi_valid(u32 base, u32 num)
return true;
}
static struct irq_chip gicv2m_pmsi_irq_chip = {
.name = "pMSI",
};
static struct msi_domain_ops gicv2m_pmsi_ops = {
};
static struct msi_domain_info gicv2m_pmsi_domain_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS),
.ops = &gicv2m_pmsi_ops,
.chip = &gicv2m_pmsi_irq_chip,
};
static int __init gicv2m_init_one(struct device_node *node,
struct irq_domain *parent)
{
int ret;
struct v2m_data *v2m;
struct irq_domain *inner_domain, *pci_domain, *plat_domain;
v2m = kzalloc(sizeof(struct v2m_data), GFP_KERNEL);
if (!v2m) {
......@@ -261,32 +273,28 @@ static int __init gicv2m_init_one(struct device_node *node,
goto err_iounmap;
}
v2m->domain = irq_domain_add_tree(NULL, &gicv2m_domain_ops, v2m);
if (!v2m->domain) {
inner_domain = irq_domain_add_tree(node, &gicv2m_domain_ops, v2m);
if (!inner_domain) {
pr_err("Failed to create GICv2m domain\n");
ret = -ENOMEM;
goto err_free_bm;
}
v2m->domain->parent = parent;
v2m->mchip.of_node = node;
v2m->mchip.domain = pci_msi_create_irq_domain(node,
&gicv2m_msi_domain_info,
v2m->domain);
if (!v2m->mchip.domain) {
pr_err("Failed to create MSI domain\n");
inner_domain->bus_token = DOMAIN_BUS_NEXUS;
inner_domain->parent = parent;
pci_domain = pci_msi_create_irq_domain(node, &gicv2m_msi_domain_info,
inner_domain);
plat_domain = platform_msi_create_irq_domain(node,
&gicv2m_pmsi_domain_info,
inner_domain);
if (!pci_domain || !plat_domain) {
pr_err("Failed to create MSI domains\n");
ret = -ENOMEM;
goto err_free_domains;
}
spin_lock_init(&v2m->msi_cnt_lock);
ret = of_pci_msi_chip_add(&v2m->mchip);
if (ret) {
pr_err("Failed to add msi_chip.\n");
goto err_free_domains;
}
pr_info("Node %s: range[%#lx:%#lx], SPI[%d:%d]\n", node->name,
(unsigned long)v2m->res.start, (unsigned long)v2m->res.end,
v2m->spi_start, (v2m->spi_start + v2m->nr_spis));
......@@ -294,10 +302,12 @@ static int __init gicv2m_init_one(struct device_node *node,
return 0;
err_free_domains:
if (v2m->mchip.domain)
irq_domain_remove(v2m->mchip.domain);
if (v2m->domain)
irq_domain_remove(v2m->domain);
if (plat_domain)
irq_domain_remove(plat_domain);
if (pci_domain)
irq_domain_remove(pci_domain);
if (inner_domain)
irq_domain_remove(inner_domain);
err_free_bm:
kfree(v2m->bm);
err_iounmap:
......
/*
* Copyright (C) 2013-2015 ARM Limited, All Rights Reserved.
* Author: Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/msi.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
static void its_mask_msi_irq(struct irq_data *d)
{
pci_msi_mask_irq(d);
irq_chip_mask_parent(d);
}
static void its_unmask_msi_irq(struct irq_data *d)
{
pci_msi_unmask_irq(d);
irq_chip_unmask_parent(d);
}
static struct irq_chip its_msi_irq_chip = {
.name = "ITS-MSI",
.irq_unmask = its_unmask_msi_irq,
.irq_mask = its_mask_msi_irq,
.irq_eoi = irq_chip_eoi_parent,
.irq_write_msi_msg = pci_msi_domain_write_msg,
};
struct its_pci_alias {
struct pci_dev *pdev;
u32 dev_id;
u32 count;
};
static int its_pci_msi_vec_count(struct pci_dev *pdev)
{
int msi, msix;
msi = max(pci_msi_vec_count(pdev), 0);
msix = max(pci_msix_vec_count(pdev), 0);
return max(msi, msix);
}
static int its_get_pci_alias(struct pci_dev *pdev, u16 alias, void *data)
{
struct its_pci_alias *dev_alias = data;
dev_alias->dev_id = alias;
if (pdev != dev_alias->pdev)
dev_alias->count += its_pci_msi_vec_count(dev_alias->pdev);
return 0;
}
static int its_pci_msi_prepare(struct irq_domain *domain, struct device *dev,
int nvec, msi_alloc_info_t *info)
{
struct pci_dev *pdev;
struct its_pci_alias dev_alias;
struct msi_domain_info *msi_info;
if (!dev_is_pci(dev))
return -EINVAL;
msi_info = msi_get_domain_info(domain->parent);
pdev = to_pci_dev(dev);
dev_alias.pdev = pdev;
dev_alias.count = nvec;
pci_for_each_dma_alias(pdev, its_get_pci_alias, &dev_alias);
/* ITS specific DeviceID, as the core ITS ignores dev. */
info->scratchpad[0].ul = dev_alias.dev_id;
return msi_info->ops->msi_prepare(domain->parent,
dev, dev_alias.count, info);
}
static struct msi_domain_ops its_pci_msi_ops = {
.msi_prepare = its_pci_msi_prepare,
};
static struct msi_domain_info its_pci_msi_domain_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX),
.ops = &its_pci_msi_ops,
.chip = &its_msi_irq_chip,
};
static struct of_device_id its_device_id[] = {
{ .compatible = "arm,gic-v3-its", },
{},
};
static int __init its_pci_msi_init(void)
{
struct device_node *np;
struct irq_domain *parent;
for (np = of_find_matching_node(NULL, its_device_id); np;
np = of_find_matching_node(np, its_device_id)) {
if (!of_property_read_bool(np, "msi-controller"))
continue;
parent = irq_find_matching_host(np, DOMAIN_BUS_NEXUS);
if (!parent || !msi_get_domain_info(parent)) {
pr_err("%s: unable to locate ITS domain\n",
np->full_name);
continue;
}
if (!pci_msi_create_irq_domain(np, &its_pci_msi_domain_info,
parent)) {
pr_err("%s: unable to create PCI domain\n",
np->full_name);
continue;
}
pr_info("PCI/MSI: %s domain created\n", np->full_name);
}
return 0;
}
early_initcall(its_pci_msi_init);
/*
* Copyright (C) 2013-2015 ARM Limited, All Rights Reserved.
* Author: Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/device.h>
#include <linux/msi.h>
#include <linux/of.h>
#include <linux/of_irq.h>
static struct irq_chip its_pmsi_irq_chip = {
.name = "ITS-pMSI",
};
static int its_pmsi_prepare(struct irq_domain *domain, struct device *dev,
int nvec, msi_alloc_info_t *info)
{
struct msi_domain_info *msi_info;
u32 dev_id;
int ret;
msi_info = msi_get_domain_info(domain->parent);
/* Suck the DeviceID out of the msi-parent property */
ret = of_property_read_u32_index(dev->of_node, "msi-parent",
1, &dev_id);
if (ret)
return ret;
/* ITS specific DeviceID, as the core ITS ignores dev. */
info->scratchpad[0].ul = dev_id;
return msi_info->ops->msi_prepare(domain->parent,
dev, nvec, info);
}
static struct msi_domain_ops its_pmsi_ops = {
.msi_prepare = its_pmsi_prepare,
};
static struct msi_domain_info its_pmsi_domain_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS),
.ops = &its_pmsi_ops,
.chip = &its_pmsi_irq_chip,
};
static struct of_device_id its_device_id[] = {
{ .compatible = "arm,gic-v3-its", },
{},
};
static int __init its_pmsi_init(void)
{
struct device_node *np;
struct irq_domain *parent;
for (np = of_find_matching_node(NULL, its_device_id); np;
np = of_find_matching_node(np, its_device_id)) {
if (!of_property_read_bool(np, "msi-controller"))
continue;
parent = irq_find_matching_host(np, DOMAIN_BUS_NEXUS);
if (!parent || !msi_get_domain_info(parent)) {
pr_err("%s: unable to locate ITS domain\n",
np->full_name);
continue;
}
if (!platform_msi_create_irq_domain(np, &its_pmsi_domain_info,
parent)) {
pr_err("%s: unable to create platform domain\n",
np->full_name);
continue;
}
pr_info("Platform MSI: %s domain created\n", np->full_name);
}
return 0;
}
early_initcall(its_pmsi_init);
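For context, the point of this pMSI glue is that non-PCI platform devices can now allocate MSIs through the generic platform-MSI layer introduced in the same series. A rough, hedged sketch of the client side (all names are hypothetical, and it assumes platform_msi_domain_alloc_irqs() keeps the signature added alongside this infrastructure):

#include <linux/interrupt.h>
#include <linux/msi.h>
#include <linux/platform_device.h>

/* Hypothetical: program the MSI doorbell address/data into the device. */
static void foo_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
{
        /* write msg->address_hi, msg->address_lo and msg->data to hardware */
}

static int foo_probe(struct platform_device *pdev)
{
        struct msi_desc *desc;
        int ret;

        /* Allocate 4 MSIs from the device's msi-parent domain. */
        ret = platform_msi_domain_alloc_irqs(&pdev->dev, 4, foo_write_msi_msg);
        if (ret)
                return ret;

        /* Each descriptor now carries a normal Linux IRQ number. */
        for_each_msi_entry(desc, &pdev->dev) {
                /* request_irq(desc->irq, ...); */
        }

        return 0;
}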
......@@ -30,14 +30,13 @@
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/irqchip.h>
#include <linux/irqchip/arm-gic-v3.h>
#include <asm/cacheflush.h>
#include <asm/cputype.h>
#include <asm/exception.h>
#include "irqchip.h"
#define ITS_FLAGS_CMDQ_NEEDS_FLUSHING (1 << 0)
#define RDIST_FLAGS_PROPBASE_NEEDS_FLUSHING (1 << 0)
......@@ -54,14 +53,12 @@ struct its_collection {
/*
* The ITS structure - contains most of the infrastructure, with the
* msi_controller, the command queue, the collections, and the list of
* devices writing to it.
* top-level MSI domain, the command queue, the collections, and the
* list of devices writing to it.
*/
struct its_node {
raw_spinlock_t lock;
struct list_head entry;
struct msi_controller msi_chip;
struct irq_domain *domain;
void __iomem *base;
unsigned long phys_base;
struct its_cmd_block *cmd_base;
......@@ -643,26 +640,6 @@ static struct irq_chip its_irq_chip = {
.irq_compose_msi_msg = its_irq_compose_msi_msg,
};
static void its_mask_msi_irq(struct irq_data *d)
{
pci_msi_mask_irq(d);
irq_chip_mask_parent(d);
}
static void its_unmask_msi_irq(struct irq_data *d)
{
pci_msi_unmask_irq(d);
irq_chip_unmask_parent(d);
}
static struct irq_chip its_msi_irq_chip = {
.name = "ITS-MSI",
.irq_unmask = its_unmask_msi_irq,
.irq_mask = its_mask_msi_irq,
.irq_eoi = irq_chip_eoi_parent,
.irq_write_msi_msg = pci_msi_domain_write_msg,
};
/*
* How we allocate LPIs:
*
......@@ -831,7 +808,7 @@ static void its_free_tables(struct its_node *its)
}
}
static int its_alloc_tables(struct its_node *its)
static int its_alloc_tables(const char *node_name, struct its_node *its)
{
int err;
int i;
......@@ -874,7 +851,7 @@ static int its_alloc_tables(struct its_node *its)
if (order >= MAX_ORDER) {
order = MAX_ORDER - 1;
pr_warn("%s: Device Table too large, reduce its page order to %u\n",
its->msi_chip.of_node->full_name, order);
node_name, order);
}
}
......@@ -944,7 +921,7 @@ static int its_alloc_tables(struct its_node *its)
if (val != tmp) {
pr_err("ITS: %s: GITS_BASER%d doesn't stick: %lx %lx\n",
its->msi_chip.of_node->full_name, i,
node_name, i,
(unsigned long) val, (unsigned long) tmp);
err = -ENXIO;
goto out_free;
......@@ -1209,85 +1186,50 @@ static int its_alloc_device_irq(struct its_device *dev, irq_hw_number_t *hwirq)
return 0;
}
struct its_pci_alias {
struct pci_dev *pdev;
u32 dev_id;
u32 count;
};
static int its_pci_msi_vec_count(struct pci_dev *pdev)
{
int msi, msix;
msi = max(pci_msi_vec_count(pdev), 0);
msix = max(pci_msix_vec_count(pdev), 0);
return max(msi, msix);
}
static int its_get_pci_alias(struct pci_dev *pdev, u16 alias, void *data)
{
struct its_pci_alias *dev_alias = data;
dev_alias->dev_id = alias;
if (pdev != dev_alias->pdev)
dev_alias->count += its_pci_msi_vec_count(dev_alias->pdev);
return 0;
}
static int its_msi_prepare(struct irq_domain *domain, struct device *dev,
int nvec, msi_alloc_info_t *info)
{
struct pci_dev *pdev;
struct its_node *its;
struct its_device *its_dev;
struct its_pci_alias dev_alias;
if (!dev_is_pci(dev))
return -EINVAL;
struct msi_domain_info *msi_info;
u32 dev_id;
pdev = to_pci_dev(dev);
dev_alias.pdev = pdev;
dev_alias.count = nvec;
/*
* We ignore "dev" entierely, and rely on the dev_id that has
* been passed via the scratchpad. This limits this domain's
* usefulness to upper layers that definitely know that they
* are built on top of the ITS.
*/
dev_id = info->scratchpad[0].ul;
pci_for_each_dma_alias(pdev, its_get_pci_alias, &dev_alias);
its = domain->parent->host_data;
msi_info = msi_get_domain_info(domain);
its = msi_info->data;
its_dev = its_find_device(its, dev_alias.dev_id);
its_dev = its_find_device(its, dev_id);
if (its_dev) {
/*
* We already have seen this ID, probably through
* another alias (PCI bridge of some sort). No need to
* create the device.
*/
dev_dbg(dev, "Reusing ITT for devID %x\n", dev_alias.dev_id);
pr_debug("Reusing ITT for devID %x\n", dev_id);
goto out;
}
its_dev = its_create_device(its, dev_alias.dev_id, dev_alias.count);
its_dev = its_create_device(its, dev_id, nvec);
if (!its_dev)
return -ENOMEM;
dev_dbg(&pdev->dev, "ITT %d entries, %d bits\n",
dev_alias.count, ilog2(dev_alias.count));
pr_debug("ITT %d entries, %d bits\n", nvec, ilog2(nvec));
out:
info->scratchpad[0].ptr = its_dev;
info->scratchpad[1].ptr = dev;
return 0;
}
static struct msi_domain_ops its_pci_msi_ops = {
static struct msi_domain_ops its_msi_domain_ops = {
.msi_prepare = its_msi_prepare,
};
static struct msi_domain_info its_pci_msi_domain_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX),
.ops = &its_pci_msi_ops,
.chip = &its_msi_irq_chip,
};
static int its_irq_gic_domain_alloc(struct irq_domain *domain,
unsigned int virq,
irq_hw_number_t hwirq)
......@@ -1323,9 +1265,9 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
irq_domain_set_hwirq_and_chip(domain, virq + i,
hwirq, &its_irq_chip, its_dev);
dev_dbg(info->scratchpad[1].ptr, "ID:%d pID:%d vID:%d\n",
(int)(hwirq - its_dev->event_map.lpi_base),
(int)hwirq, virq + i);
pr_debug("ID:%d pID:%d vID:%d\n",
(int)(hwirq - its_dev->event_map.lpi_base),
(int) hwirq, virq + i);
}
return 0;
......@@ -1426,6 +1368,7 @@ static int its_probe(struct device_node *node, struct irq_domain *parent)
struct resource res;
struct its_node *its;
void __iomem *its_base;
struct irq_domain *inner_domain;
u32 val;
u64 baser, tmp;
int err;
......@@ -1469,7 +1412,6 @@ static int its_probe(struct device_node *node, struct irq_domain *parent)
INIT_LIST_HEAD(&its->its_device_list);
its->base = its_base;
its->phys_base = res.start;
its->msi_chip.of_node = node;
its->ite_size = ((readl_relaxed(its_base + GITS_TYPER) >> 4) & 0xf) + 1;
its->cmd_base = kzalloc(ITS_CMD_QUEUE_SZ, GFP_KERNEL);
......@@ -1479,7 +1421,7 @@ static int its_probe(struct device_node *node, struct irq_domain *parent)
}
its->cmd_write = its->cmd_base;
err = its_alloc_tables(its);
err = its_alloc_tables(node->full_name, its);
if (err)
goto out_free_cmd;
......@@ -1515,26 +1457,27 @@ static int its_probe(struct device_node *node, struct irq_domain *parent)
writeq_relaxed(0, its->base + GITS_CWRITER);
writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR);
if (of_property_read_bool(its->msi_chip.of_node, "msi-controller")) {
its->domain = irq_domain_add_tree(NULL, &its_domain_ops, its);
if (!its->domain) {
if (of_property_read_bool(node, "msi-controller")) {
struct msi_domain_info *info;
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
err = -ENOMEM;
goto out_free_tables;
}
its->domain->parent = parent;
its->msi_chip.domain = pci_msi_create_irq_domain(node,
&its_pci_msi_domain_info,
its->domain);
if (!its->msi_chip.domain) {
inner_domain = irq_domain_add_tree(node, &its_domain_ops, its);
if (!inner_domain) {
err = -ENOMEM;
goto out_free_domains;
kfree(info);
goto out_free_tables;
}
err = of_pci_msi_chip_add(&its->msi_chip);
if (err)
goto out_free_domains;
inner_domain->parent = parent;
inner_domain->bus_token = DOMAIN_BUS_NEXUS;
info->ops = &its_msi_domain_ops;
info->data = its;
inner_domain->host_data = info;
}
spin_lock(&its_lock);
......@@ -1543,11 +1486,6 @@ static int its_probe(struct device_node *node, struct irq_domain *parent)
return 0;
out_free_domains:
if (its->msi_chip.domain)
irq_domain_remove(its->msi_chip.domain);
if (its->domain)
irq_domain_remove(its->domain);
out_free_tables:
its_free_tables(its);
out_free_cmd:
......
......@@ -25,6 +25,7 @@
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/irqchip.h>
#include <linux/irqchip/arm-gic-v3.h>
#include <asm/cputype.h>
......@@ -32,7 +33,6 @@
#include <asm/smp_plat.h>
#include "irq-gic-common.h"
#include "irqchip.h"
struct redist_region {
void __iomem *redist_base;
......
......@@ -38,6 +38,7 @@
#include <linux/interrupt.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/irqchip.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqchip/arm-gic.h>
#include <linux/irqchip/arm-gic-acpi.h>
......@@ -48,7 +49,6 @@
#include <asm/smp_plat.h>
#include "irq-gic-common.h"
#include "irqchip.h"
union gic_base {
void __iomem *common_base;
......@@ -288,8 +288,8 @@ static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
static void gic_handle_cascade_irq(unsigned int irq, struct irq_desc *desc)
{
struct gic_chip_data *chip_data = irq_get_handler_data(irq);
struct irq_chip *chip = irq_get_chip(irq);
struct gic_chip_data *chip_data = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned int cascade_irq, gic_irq;
unsigned long status;
......@@ -324,16 +324,17 @@ static struct irq_chip gic_chip = {
#endif
.irq_get_irqchip_state = gic_irq_get_irqchip_state,
.irq_set_irqchip_state = gic_irq_set_irqchip_state,
.flags = IRQCHIP_SET_TYPE_MASKED,
.flags = IRQCHIP_SET_TYPE_MASKED |
IRQCHIP_SKIP_SET_WAKE |
IRQCHIP_MASK_ON_SUSPEND,
};
void __init gic_cascade_irq(unsigned int gic_nr, unsigned int irq)
{
if (gic_nr >= MAX_GIC_NR)
BUG();
if (irq_set_handler_data(irq, &gic_data[gic_nr]) != 0)
BUG();
irq_set_chained_handler(irq, gic_handle_cascade_irq);
irq_set_chained_handler_and_data(irq, gic_handle_cascade_irq,
&gic_data[gic_nr]);
}
static u8 gic_get_cpumask(struct gic_chip_data *gic)
......@@ -355,9 +356,9 @@ static u8 gic_get_cpumask(struct gic_chip_data *gic)
return mask;
}
static void gic_cpu_if_up(void)
static void gic_cpu_if_up(struct gic_chip_data *gic)
{
void __iomem *cpu_base = gic_data_cpu_base(&gic_data[0]);
void __iomem *cpu_base = gic_data_cpu_base(gic);
u32 bypass = 0;
/*
......@@ -401,34 +402,47 @@ static void gic_cpu_init(struct gic_chip_data *gic)
int i;
/*
* Get what the GIC says our CPU mask is.
* Setting up the CPU map is only relevant for the primary GIC
* because any nested/secondary GICs do not directly interface
* with the CPU(s).
*/
BUG_ON(cpu >= NR_GIC_CPU_IF);
cpu_mask = gic_get_cpumask(gic);
gic_cpu_map[cpu] = cpu_mask;
if (gic == &gic_data[0]) {
/*
* Get what the GIC says our CPU mask is.
*/
BUG_ON(cpu >= NR_GIC_CPU_IF);
cpu_mask = gic_get_cpumask(gic);
gic_cpu_map[cpu] = cpu_mask;
/*
* Clear our mask from the other map entries in case they're
* still undefined.
*/
for (i = 0; i < NR_GIC_CPU_IF; i++)
if (i != cpu)
gic_cpu_map[i] &= ~cpu_mask;
/*
* Clear our mask from the other map entries in case they're
* still undefined.
*/
for (i = 0; i < NR_GIC_CPU_IF; i++)
if (i != cpu)
gic_cpu_map[i] &= ~cpu_mask;
}
gic_cpu_config(dist_base, NULL);
writel_relaxed(GICC_INT_PRI_THRESHOLD, base + GIC_CPU_PRIMASK);
gic_cpu_if_up();
gic_cpu_if_up(gic);
}
void gic_cpu_if_down(void)
int gic_cpu_if_down(unsigned int gic_nr)
{
void __iomem *cpu_base = gic_data_cpu_base(&gic_data[0]);
void __iomem *cpu_base;
u32 val = 0;
if (gic_nr >= MAX_GIC_NR)
return -EINVAL;
cpu_base = gic_data_cpu_base(&gic_data[gic_nr]);
val = readl(cpu_base + GIC_CPU_CTRL);
val &= ~GICC_ENABLE;
writel_relaxed(val, cpu_base + GIC_CPU_CTRL);
return 0;
}
#ifdef CONFIG_CPU_PM
......@@ -564,7 +578,7 @@ static void gic_cpu_restore(unsigned int gic_nr)
dist_base + GIC_DIST_PRI + i * 4);
writel_relaxed(GICC_INT_PRI_THRESHOLD, cpu_base + GIC_CPU_PRIMASK);
gic_cpu_if_up();
gic_cpu_if_up(&gic_data[gic_nr]);
}
static int gic_notifier(struct notifier_block *self, unsigned long cmd, void *v)
......@@ -880,11 +894,6 @@ static const struct irq_domain_ops gic_irq_domain_ops = {
.xlate = gic_irq_domain_xlate,
};
void gic_set_irqchip_flags(unsigned long flags)
{
gic_chip.flags |= flags;
}
void __init gic_init_bases(unsigned int gic_nr, int irq_start,
void __iomem *dist_base, void __iomem *cpu_base,
u32 percpu_offset, struct device_node *node)
......@@ -929,13 +938,6 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
gic_set_base_accessor(gic, gic_get_common_base);
}
/*
* Initialize the CPU interface map to all CPUs.
* It will be refined as each CPU probes its ID.
*/
for (i = 0; i < NR_GIC_CPU_IF; i++)
gic_cpu_map[i] = 0xff;
/*
* Find out how many interrupts are supported.
* The GIC only supports up to 1020 interrupt sources.
......@@ -981,6 +983,13 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
return;
if (gic_nr == 0) {
/*
* Initialize the CPU interface map to all CPUs.
* It will be refined as each CPU probes its ID.
* This is only necessary for the primary GIC.
*/
for (i = 0; i < NR_GIC_CPU_IF; i++)
gic_cpu_map[i] = 0xff;
#ifdef CONFIG_SMP
set_smp_cross_call(gic_raise_softirq);
register_cpu_notifier(&gic_cpu_notifier);
......
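gic_cpu_if_down() now selects the GIC instance explicitly and reports failure instead of always poking the primary GIC. A hedged sketch of how a platform low-power path would adapt (the caller is hypothetical):

#include <linux/irqchip/arm-gic.h>

/*
 * Hypothetical platform suspend hook: shut down the CPU interface of the
 * primary GIC (instance 0) and propagate the error instead of assuming
 * the call cannot fail.
 */
static int foo_platform_cpu_suspend(void)
{
        int ret;

        ret = gic_cpu_if_down(0);       /* was: void gic_cpu_if_down(void) */
        if (ret)
                return ret;

        /* ... enter the low-power state ... */
        return 0;
}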
......@@ -41,6 +41,7 @@
#include <linux/irqdomain.h>
#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/irqchip.h>
#include <linux/irqchip/arm-gic.h>
#include <asm/irq.h>
......@@ -48,7 +49,6 @@
#include <asm/smp_plat.h>
#include "irq-gic-common.h"
#include "irqchip.h"
#define HIP04_MAX_IRQS 510
......@@ -202,7 +202,9 @@ static struct irq_chip hip04_irq_chip = {
#ifdef CONFIG_SMP
.irq_set_affinity = hip04_irq_set_affinity,
#endif
.flags = IRQCHIP_SET_TYPE_MASKED,
.flags = IRQCHIP_SET_TYPE_MASKED |
IRQCHIP_SKIP_SET_WAKE |
IRQCHIP_MASK_ON_SUSPEND,
};
static u16 hip04_get_cpumask(struct hip04_irq_data *intc)
......
......@@ -12,6 +12,7 @@
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/interrupt.h>
#include <linux/irqchip.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/of_irq.h>
......@@ -22,8 +23,6 @@
#include <asm/i8259.h>
#include <asm/io.h>
#include "../../drivers/irqchip/irqchip.h"
/*
* This is the 'legacy' 8259A Programmable Interrupt Controller,
* present in the majority of PC/AT boxes.
......@@ -353,10 +352,11 @@ void __init init_i8259_irqs(void)
__init_i8259_irqs(NULL);
}
static void i8259_irq_dispatch(unsigned int irq, struct irq_desc *desc)
static void i8259_irq_dispatch(unsigned int __irq, struct irq_desc *desc)
{
struct irq_domain *domain = irq_get_handler_data(irq);
struct irq_domain *domain = irq_desc_get_handler_data(desc);
int hwirq = i8259_irq();
unsigned int irq;
if (hwirq < 0)
return;
......
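The i8259 change above, like the pdc and keystone conversions below, follows one pattern: stop relying on the irq argument that is slated for removal from flow handlers and derive everything from the descriptor instead. A minimal sketch of the target shape (the handler and its private struct are hypothetical):

#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>

struct foo_priv;        /* hypothetical driver state */

/*
 * Hypothetical chained handler written against the descriptor only; the
 * first argument is kept (and ignored) until the flow-handler prototype
 * finally drops it.
 */
static void foo_cascade_handler(unsigned int __irq, struct irq_desc *desc)
{
        unsigned int irq = irq_desc_get_irq(desc);      /* if the number is needed */
        struct foo_priv *priv = irq_desc_get_handler_data(desc);
        struct irq_chip *chip = irq_desc_get_chip(desc);

        chained_irq_enter(chip, desc);
        /* ... demultiplex using priv and irq, never the __irq argument ... */
        chained_irq_exit(chip, desc);
}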
......@@ -218,8 +218,9 @@ static int pdc_irq_set_wake(struct irq_data *data, unsigned int on)
return 0;
}
static void pdc_intc_perip_isr(unsigned int irq, struct irq_desc *desc)
static void pdc_intc_perip_isr(unsigned int __irq, struct irq_desc *desc)
{
unsigned int irq = irq_desc_get_irq(desc);
struct pdc_intc_priv *priv;
unsigned int i, irq_no;
......@@ -451,13 +452,13 @@ static int pdc_intc_probe(struct platform_device *pdev)
/* Setup chained handlers for the peripheral IRQs */
for (i = 0; i < priv->nr_perips; ++i) {
irq = priv->perip_irqs[i];
irq_set_handler_data(irq, priv);
irq_set_chained_handler(irq, pdc_intc_perip_isr);
irq_set_chained_handler_and_data(irq, pdc_intc_perip_isr,
priv);
}
/* Setup chained handler for the syswake IRQ */
irq_set_handler_data(priv->syswake_irq, priv);
irq_set_chained_handler(priv->syswake_irq, pdc_intc_syswake_isr);
irq_set_chained_handler_and_data(priv->syswake_irq,
pdc_intc_syswake_isr, priv);
dev_info(&pdev->dev,
"PDC IRQ controller initialised (%u perip IRQs, %u syswake IRQs)\n",
......
/*
* Copyright (C) 2015 Freescale Semiconductor, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/slab.h>
#include <linux/irqchip.h>
#include <linux/syscore_ops.h>
#define IMR_NUM 4
#define GPC_MAX_IRQS (IMR_NUM * 32)
#define GPC_IMR1_CORE0 0x30
#define GPC_IMR1_CORE1 0x40
struct gpcv2_irqchip_data {
struct raw_spinlock rlock;
void __iomem *gpc_base;
u32 wakeup_sources[IMR_NUM];
u32 saved_irq_mask[IMR_NUM];
u32 cpu2wakeup;
};
static struct gpcv2_irqchip_data *imx_gpcv2_instance;
/*
* Interface for the low level wakeup code.
*/
u32 imx_gpcv2_get_wakeup_source(u32 **sources)
{
if (!imx_gpcv2_instance)
return 0;
if (sources)
*sources = imx_gpcv2_instance->wakeup_sources;
return IMR_NUM;
}
static int gpcv2_wakeup_source_save(void)
{
struct gpcv2_irqchip_data *cd;
void __iomem *reg;
int i;
cd = imx_gpcv2_instance;
if (!cd)
return 0;
for (i = 0; i < IMR_NUM; i++) {
reg = cd->gpc_base + cd->cpu2wakeup + i * 4;
cd->saved_irq_mask[i] = readl_relaxed(reg);
writel_relaxed(cd->wakeup_sources[i], reg);
}
return 0;
}
static void gpcv2_wakeup_source_restore(void)
{
struct gpcv2_irqchip_data *cd;
void __iomem *reg;
int i;
cd = imx_gpcv2_instance;
if (!cd)
return;
for (i = 0; i < IMR_NUM; i++) {
reg = cd->gpc_base + cd->cpu2wakeup + i * 4;
writel_relaxed(cd->saved_irq_mask[i], reg);
}
}
static struct syscore_ops imx_gpcv2_syscore_ops = {
.suspend = gpcv2_wakeup_source_save,
.resume = gpcv2_wakeup_source_restore,
};
static int imx_gpcv2_irq_set_wake(struct irq_data *d, unsigned int on)
{
struct gpcv2_irqchip_data *cd = d->chip_data;
unsigned int idx = d->hwirq / 32;
unsigned long flags;
void __iomem *reg;
u32 mask, val;
raw_spin_lock_irqsave(&cd->rlock, flags);
reg = cd->gpc_base + cd->cpu2wakeup + idx * 4;
mask = 1 << d->hwirq % 32;
val = cd->wakeup_sources[idx];
cd->wakeup_sources[idx] = on ? (val & ~mask) : (val | mask);
raw_spin_unlock_irqrestore(&cd->rlock, flags);
/*
* Do *not* call into the parent, as the GIC doesn't have any
* wake-up facility...
*/
return 0;
}
static void imx_gpcv2_irq_unmask(struct irq_data *d)
{
struct gpcv2_irqchip_data *cd = d->chip_data;
void __iomem *reg;
u32 val;
raw_spin_lock(&cd->rlock);
reg = cd->gpc_base + cd->cpu2wakeup + d->hwirq / 32 * 4;
val = readl_relaxed(reg);
val &= ~(1 << d->hwirq % 32);
writel_relaxed(val, reg);
raw_spin_unlock(&cd->rlock);
irq_chip_unmask_parent(d);
}
static void imx_gpcv2_irq_mask(struct irq_data *d)
{
struct gpcv2_irqchip_data *cd = d->chip_data;
void __iomem *reg;
u32 val;
raw_spin_lock(&cd->rlock);
reg = cd->gpc_base + cd->cpu2wakeup + d->hwirq / 32 * 4;
val = readl_relaxed(reg);
val |= 1 << (d->hwirq % 32);
writel_relaxed(val, reg);
raw_spin_unlock(&cd->rlock);
irq_chip_mask_parent(d);
}
static struct irq_chip gpcv2_irqchip_data_chip = {
.name = "GPCv2",
.irq_eoi = irq_chip_eoi_parent,
.irq_mask = imx_gpcv2_irq_mask,
.irq_unmask = imx_gpcv2_irq_unmask,
.irq_set_wake = imx_gpcv2_irq_set_wake,
.irq_retrigger = irq_chip_retrigger_hierarchy,
#ifdef CONFIG_SMP
.irq_set_affinity = irq_chip_set_affinity_parent,
#endif
};
static int imx_gpcv2_domain_xlate(struct irq_domain *domain,
struct device_node *controller,
const u32 *intspec,
unsigned int intsize,
unsigned long *out_hwirq,
unsigned int *out_type)
{
/* Shouldn't happen, really... */
if (domain->of_node != controller)
return -EINVAL;
/* Not GIC compliant */
if (intsize != 3)
return -EINVAL;
/* No PPI should point to this domain */
if (intspec[0] != 0)
return -EINVAL;
*out_hwirq = intspec[1];
*out_type = intspec[2];
return 0;
}
static int imx_gpcv2_domain_alloc(struct irq_domain *domain,
unsigned int irq, unsigned int nr_irqs,
void *data)
{
struct of_phandle_args *args = data;
struct of_phandle_args parent_args;
irq_hw_number_t hwirq;
int i;
/* Not GIC compliant */
if (args->args_count != 3)
return -EINVAL;
/* No PPI should point to this domain */
if (args->args[0] != 0)
return -EINVAL;
/* Can't deal with this */
hwirq = args->args[1];
if (hwirq >= GPC_MAX_IRQS)
return -EINVAL;
for (i = 0; i < nr_irqs; i++) {
irq_domain_set_hwirq_and_chip(domain, irq + i, hwirq + i,
&gpcv2_irqchip_data_chip, domain->host_data);
}
parent_args = *args;
parent_args.np = domain->parent->of_node;
return irq_domain_alloc_irqs_parent(domain, irq, nr_irqs, &parent_args);
}
static struct irq_domain_ops gpcv2_irqchip_data_domain_ops = {
.xlate = imx_gpcv2_domain_xlate,
.alloc = imx_gpcv2_domain_alloc,
.free = irq_domain_free_irqs_common,
};
static int __init imx_gpcv2_irqchip_init(struct device_node *node,
struct device_node *parent)
{
struct irq_domain *parent_domain, *domain;
struct gpcv2_irqchip_data *cd;
int i;
if (!parent) {
pr_err("%s: no parent, giving up\n", node->full_name);
return -ENODEV;
}
parent_domain = irq_find_host(parent);
if (!parent_domain) {
pr_err("%s: unable to get parent domain\n", node->full_name);
return -ENXIO;
}
cd = kzalloc(sizeof(struct gpcv2_irqchip_data), GFP_KERNEL);
if (!cd) {
pr_err("kzalloc failed!\n");
return -ENOMEM;
}
cd->gpc_base = of_iomap(node, 0);
if (!cd->gpc_base) {
pr_err("fsl-gpcv2: unable to map gpc registers\n");
kfree(cd);
return -ENOMEM;
}
domain = irq_domain_add_hierarchy(parent_domain, 0, GPC_MAX_IRQS,
node, &gpcv2_irqchip_data_domain_ops, cd);
if (!domain) {
iounmap(cd->gpc_base);
kfree(cd);
return -ENOMEM;
}
irq_set_default_host(domain);
/* Initially mask all interrupts */
for (i = 0; i < IMR_NUM; i++) {
writel_relaxed(~0, cd->gpc_base + GPC_IMR1_CORE0 + i * 4);
writel_relaxed(~0, cd->gpc_base + GPC_IMR1_CORE1 + i * 4);
cd->wakeup_sources[i] = ~0;
}
/* Use CORE0 as the default CPU to be woken up by the GPC */
cd->cpu2wakeup = GPC_IMR1_CORE0;
/*
* Due to a hardware design failure, we need to make sure the GPR
* interrupt (#32) is unmasked during RUN mode to avoid entering
* DSM by mistake.
*/
writel_relaxed(~0x1, cd->gpc_base + cd->cpu2wakeup);
imx_gpcv2_instance = cd;
register_syscore_ops(&imx_gpcv2_syscore_ops);
return 0;
}
IRQCHIP_DECLARE(imx_gpcv2, "fsl,imx7d-gpc", imx_gpcv2_irqchip_init);
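imx_gpcv2_get_wakeup_source() above is meant for the low-level i.MX suspend code; that caller is not part of this diff, so the following is only a hedged sketch of a plausible consumer (the function name and destination buffer are assumptions):

#include <linux/types.h>

/*
 * Hypothetical i.MX7 suspend helper: copy the GPCv2 wakeup-source masks
 * (one 32-bit mask per IMR register) into a buffer consumed by the
 * low-level resume code.
 */
static void foo_imx7_copy_wakeup_sources(u32 *dest)
{
        u32 *sources;
        u32 num, i;

        num = imx_gpcv2_get_wakeup_source(&sources);
        for (i = 0; i < num; i++)
                dest[i] = sources[i];
}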
......@@ -18,6 +18,7 @@
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/ioport.h>
#include <linux/irqchip.h>
#include <linux/irqchip/ingenic.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
......@@ -28,8 +29,6 @@
#include <asm/io.h>
#include <asm/mach-jz4740/irq.h>
#include "irqchip.h"
struct ingenic_intc_data {
void __iomem *base;
unsigned num_chips;
......
......@@ -20,13 +20,12 @@
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/irqdomain.h>
#include <linux/irqchip.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>
#include "irqchip.h"
/* The source ID bits start from 4 to 31 (total 28 bits)*/
#define BIT_OFS 4
......@@ -84,8 +83,9 @@ static void keystone_irq_ack(struct irq_data *d)
/* nothing to do here */
}
static void keystone_irq_handler(unsigned irq, struct irq_desc *desc)
static void keystone_irq_handler(unsigned __irq, struct irq_desc *desc)
{
unsigned int irq = irq_desc_get_irq(desc);
struct keystone_irq_device *kirq = irq_desc_get_handler_data(desc);
unsigned long pending;
int src, virq;
......
......@@ -404,7 +404,6 @@ static int meta_intc_irq_set_type(struct irq_data *data, unsigned int flow_type)
#ifdef CONFIG_METAG_SUSPEND_MEM
struct meta_intc_priv *priv = &meta_intc_priv;
#endif
unsigned int irq = data->irq;
irq_hw_number_t hw = data->hwirq;
unsigned int bit = 1 << meta_intc_offset(hw);
void __iomem *level_addr = meta_intc_level_addr(hw);
......@@ -413,11 +412,11 @@ static int meta_intc_irq_set_type(struct irq_data *data, unsigned int flow_type)
/* update the chip/handler */
if (flow_type & IRQ_TYPE_LEVEL_MASK)
__irq_set_chip_handler_name_locked(irq, &meta_intc_level_chip,
handle_level_irq, NULL);
irq_set_chip_handler_name_locked(data, &meta_intc_level_chip,
handle_level_irq, NULL);
else
__irq_set_chip_handler_name_locked(irq, &meta_intc_edge_chip,
handle_edge_irq, NULL);
irq_set_chip_handler_name_locked(data, &meta_intc_edge_chip,
handle_edge_irq, NULL);
/* and clear/set the bit in HWLEVELEXT */
__global_lock2(flags);
......
......@@ -286,8 +286,7 @@ static void metag_internal_irq_init_cpu(struct metag_internal_irq_priv *priv,
int irq = tbisig_map(signum);
/* Register the multiplexed IRQ handler */
irq_set_handler_data(irq, priv);
irq_set_chained_handler(irq, metag_internal_irq_demux);
irq_set_chained_handler_and_data(irq, metag_internal_irq_demux, priv);
irq_set_irq_type(irq, IRQ_TYPE_LEVEL_LOW);
}
......
......@@ -31,6 +31,7 @@
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/irqdomain.h>
#include <asm/irq_cpu.h>
......@@ -38,8 +39,6 @@
#include <asm/mipsmtregs.h>
#include <asm/setup.h>
#include "irqchip.h"
static inline void unmask_mips_irq(struct irq_data *d)
{
set_c0_status(0x100 << (d->irq - MIPS_CPU_IRQ_BASE));
......
......@@ -11,6 +11,7 @@
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/irqchip/mips-gic.h>
#include <linux/of_address.h>
#include <linux/sched.h>
......@@ -22,8 +23,6 @@
#include <dt-bindings/interrupt-controller/mips-gic.h>
#include "irqchip.h"
unsigned int gic_present;
struct gic_pcpu_mask {
......@@ -358,15 +357,12 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
break;
}
if (is_edge) {
__irq_set_chip_handler_name_locked(d->irq,
&gic_edge_irq_controller,
handle_edge_irq, NULL);
} else {
__irq_set_chip_handler_name_locked(d->irq,
&gic_level_irq_controller,
handle_level_irq, NULL);
}
if (is_edge)
irq_set_chip_handler_name_locked(d, &gic_edge_irq_controller,
handle_edge_irq, NULL);
else
irq_set_chip_handler_name_locked(d, &gic_level_irq_controller,
handle_level_irq, NULL);
spin_unlock_irqrestore(&gic_lock, flags);
return 0;
......@@ -396,7 +392,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
clear_bit(irq, pcpu_masks[i].pcpu_mask);
set_bit(irq, pcpu_masks[cpumask_first(&tmp)].pcpu_mask);
cpumask_copy(d->affinity, cpumask);
cpumask_copy(irq_data_get_affinity_mask(d), cpumask);
spin_unlock_irqrestore(&gic_lock, flags);
return IRQ_SET_MASK_OK_NOCOPY;
......
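This hunk, like the sparc, parisc and mn10300 patches in the same series, swaps direct field access for the irq_data accessors. A short sketch of the pattern (the function is illustrative only):

#include <linux/irq.h>

/*
 * Illustration only: prefer the irq_data accessors over direct field
 * access, so core-internal layout changes do not break the driver.
 */
static void foo_use_accessors(struct irq_data *d)
{
        struct cpumask *mask = irq_data_get_affinity_mask(d);     /* was d->affinity */
        void *handler_data = irq_data_get_irq_handler_data(d);    /* was d->handler_data */

        /* ... use mask and handler_data ... */
}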