- 16 Aug 2017 (2 commits)
Committed by Mark Rutland
This patch enables arm64 to be built with vmap'd task and IRQ stacks.

As vmap'd stacks are mapped at page granularity, stacks must be a multiple of PAGE_SIZE. This means that a 64K page kernel must use stacks of at least 64K in size.

To minimize the increase in Image size, IRQ stacks are dynamically allocated at boot time, rather than embedding the boot CPU's IRQ stack in the kernel image.

This patch was co-authored by Ard Biesheuvel and Mark Rutland.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
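As a rough illustration of what page-granular, boot-time IRQ stack allocation can look like (a minimal sketch, not the actual patch; IRQ_STACK_SIZE, irq_stack_ptr and init_irq_stacks are assumed names here):

```c
#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/vmalloc.h>

#define IRQ_STACK_SIZE	(16 * 1024)	/* assumed size for the sketch */

static DEFINE_PER_CPU(unsigned long *, irq_stack_ptr);

/*
 * Allocate each CPU's IRQ stack from the vmalloc area at boot.
 * vmalloc() hands back page-aligned memory with a guard hole after the
 * allocation, so an overflow faults instead of silently corrupting
 * neighbouring data, and no stack is embedded in the kernel Image.
 */
static int __init init_irq_stacks(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		unsigned long *stack = vmalloc(IRQ_STACK_SIZE);

		if (!stack)
			return -ENOMEM;
		per_cpu(irq_stack_ptr, cpu) = stack;
	}
	return 0;
}
```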
-
Committed by Mark Rutland
We allocate our IRQ stacks using a percpu array. This allows us to generate our IRQ stack pointers with adr_this_cpu, but bloats the kernel Image with the boot CPU's IRQ stack. Additionally, these are packed with other percpu variables, and aren't guaranteed to have guard pages.

When we enable VMAP_STACK we'll want to vmap our IRQ stacks also, in order to provide guard pages and to permit more stringent alignment requirements. Doing so will require that we use a percpu pointer to each IRQ stack, rather than allocating a percpu IRQ stack in the kernel image.

This patch updates our IRQ stack code to use a percpu pointer to the base of each IRQ stack. This will allow us to change the way the stack is allocated with minimal changes elsewhere. In some cases we may try to backtrace before the IRQ stack pointers are initialised, so on_irq_stack() is updated to account for this.

In testing with cyclictest, there was no measurable difference between using adr_this_cpu (for irq_stack) and ldr_this_cpu (for irq_stack_ptr) in the IRQ entry path.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
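The shape of that change, sketched below (illustrative only; IRQ_STACK_SIZE is an assumed constant and the real diff differs in detail):

```c
/*
 * Before: the IRQ stack itself is a per-CPU array, so a full stack for
 * the boot CPU is carried in the kernel image, packed with other
 * per-CPU variables and without guard pages.
 */
DEFINE_PER_CPU(unsigned long [IRQ_STACK_SIZE / sizeof(long)], irq_stack);

/*
 * After: only a per-CPU pointer lives in the image; the stack it points
 * at can be allocated elsewhere (and later vmap'd).
 */
DEFINE_PER_CPU(unsigned long *, irq_stack_ptr);

static bool on_irq_stack(unsigned long sp, int cpu)
{
	unsigned long low = (unsigned long)per_cpu(irq_stack_ptr, cpu);

	/* Backtraces may be requested before the pointers are set up. */
	if (!low)
		return false;

	return sp >= low && sp < low + IRQ_STACK_SIZE;
}
```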
-
- 22 Dec 2015 (1 commit)
Committed by James Morse
sysrq_handle_reboot() re-enables interrupts while on the irq stack. The irq_stack implementation wrongly assumed this would only ever happen via the softirq path, allowing it to update irq_count late, in do_softirq_own_stack().

This means that if an irq occurs in sysrq_handle_reboot(), during emergency_restart() the stack will be corrupted, as irq_count wasn't updated.

Lose the optimisation: instead of moving the adding/subtracting of irq_count into irq_stack_entry/irq_stack_exit, remove it, and compare sp_el0 (struct thread_info) with sp & ~(THREAD_SIZE - 1). This tells us whether we are on a task stack; if so, we can safely switch to the irq stack. Finally, remove do_softirq_own_stack(); we don't need it anymore.

Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: use get_thread_info macro]
Signed-off-by: Will Deacon <will.deacon@arm.com>
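In C terms, the check amounts to something like the sketch below (the real change is an entry.S macro using get_thread_info; this is only an illustration):

```c
/*
 * sp_el0 holds the current struct thread_info pointer, which sits at
 * the base of the task stack. Masking the active stack pointer down to
 * THREAD_SIZE alignment yields that same address only while we are
 * still on the task stack, in which case switching to the IRQ stack is
 * safe.
 */
static inline bool on_task_stack(unsigned long sp, unsigned long sp_el0)
{
	return (sp & ~(THREAD_SIZE - 1)) == sp_el0;
}
```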
-
- 08 Dec 2015 (2 commits)
Committed by James Morse
entry.S is modified to switch to the per_cpu irq_stack during el{0,1}_irq. irq_count is used to detect recursive interrupts on the irq_stack; it is updated late by do_softirq_own_stack(), when called on the irq_stack, before __do_softirq() re-enables interrupts to process softirqs.

do_softirq_own_stack() is added by this patch, but does not yet switch stack.

This patch adds the dummy stack frame and data needed by the previous stack tracing patches.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
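A rough sketch of the resulting recursion check, assuming the per-CPU irq_count and an on_irq_stack() helper as named above (illustrative, not the exact code):

```c
DEFINE_PER_CPU(int, irq_count);

/*
 * Softirq processing on the IRQ stack: irq_count is only raised once we
 * are already on the IRQ stack, just before __do_softirq() re-enables
 * interrupts, so a nested IRQ sees a non-zero count and knows not to
 * switch stacks again.
 */
void do_softirq_own_stack(void)
{
	int cpu = smp_processor_id();

	if (on_irq_stack(current_stack_pointer, cpu)) {
		__this_cpu_inc(irq_count);
		__do_softirq();
		__this_cpu_dec(irq_count);
	} else {
		__do_softirq();
	}
}
```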
-
Committed by AKASHI Takahiro
This patch allows unwind_frame() to traverse from interrupt stack to task stack correctly. It requires data from a dummy stack frame, created during irq_stack_entry(), added by a later patch.

A similar approach is taken to modify dump_backtrace(), which expects to find struct pt_regs underneath any call to functions marked __exception. When on an irq_stack, the struct pt_regs is stored on the old task stack, the location of which is stored in the dummy stack frame.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[james.morse: merged two patches, reworked for per_cpu irq_stacks, and no alignment guarantees, added irq_stack definitions]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
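Conceptually the traversal works roughly as below; the frame layout and the 0x10 offset are assumptions made for this sketch, not the actual implementation:

```c
/*
 * irq_stack_entry() leaves a dummy frame record at the top of the IRQ
 * stack whose saved frame pointer (x29) refers back into the
 * interrupted task's stack. When the unwinder reaches that record it
 * follows the pointer and keeps walking on the task stack.
 */
static unsigned long irq_to_task_fp(unsigned long fp, int cpu)
{
	unsigned long top = (unsigned long)per_cpu(irq_stack_ptr, cpu) +
			    IRQ_STACK_SIZE;

	if (fp != top - 0x10)		/* not at the dummy frame yet */
		return fp;

	return *(unsigned long *)fp;	/* saved task-stack x29 */
}
```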
-
- 10 Oct 2015 (1 commit)
Committed by Yang Yingliang
When a cpu is disabled, all of its irqs are migrated to another cpu. In some cases the new affinity differs from the old one; the old affinity then needs to be updated, but if irq_set_affinity's return value is IRQ_SET_MASK_OK_DONE the old affinity cannot be updated. Fix it by using irq_do_set_affinity.

Migrating interrupts is also a core-code matter, so use the generic function irq_migrate_all_off_this_cpu() in kernel/irq/migration.c to migrate interrupts.

Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
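The arch-side hotplug path then reduces to roughly the following sketch (details elided):

```c
#include <linux/irq.h>

/*
 * On CPU offline, migrating interrupts away from the dying CPU is
 * delegated to the generic helper, so affinity updates go through
 * irq_do_set_affinity() in core code rather than arch-local logic.
 */
int __cpu_disable(void)
{
	/* ... arch-specific teardown elided ... */

	irq_migrate_all_off_this_cpu();
	return 0;
}
```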
-
- 27 Jul 2015 (1 commit)
Committed by Will Deacon
Nobody seems to be producing !SMP systems anymore, so this is just becoming a source of kernel bugs, particularly if people want to use coherent DMA with non-shared pages.

This patch forces CONFIG_SMP=y for arm64, removing a modest amount of code in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 22 Jul 2015 (1 commit)
Committed by Jiang Liu
This is a preparatory patch for moving irq_data struct members.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 Nov 2014 (1 commit)
Committed by Laura Abbott
handle_arch_irq isn't actually text; it's just a function pointer. It doesn't need to be stored in the text section, and doing so causes problems if we ever want to make the kernel text read only. Declare handle_arch_irq as a proper function pointer stored in the data section.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
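A sketch of the resulting shape (not the exact diff):

```c
/*
 * The hook is ordinary data: a pointer assigned once at boot by the
 * interrupt controller driver. Declaring it in C places it in the data
 * section rather than reserving a slot for it in .text from assembly.
 */
void (*handle_arch_irq)(struct pt_regs *) = NULL;

void __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
{
	if (handle_arch_irq)
		return;

	handle_arch_irq = handle_irq;
}
```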
-
- 04 Sep 2014 (1 commit)
Committed by Sudeep Holla
The arm64 interrupt migration code on cpu offline calls irqchip.irq_set_affinity() with the argument force=true. Originally this argument had no effect because it was not used by any interrupt chip driver and there was no semantics defined.

This changed with commit 01f8fa4f ("genirq: Allow forcing cpu affinity of interrupts"), which made the force argument useful to route interrupts to not yet online cpus without checking the target cpu against the cpu online mask. The following commit ffde1de6 ("irqchip: gic: Support forced affinity setting") implemented this for the GIC interrupt controller.

As a consequence the cpu offline irq migration fails if CPU0 is offlined, because CPU0 is still set in the affinity mask and the validation against the cpu online mask is skipped due to the force argument being true. The subsequent first_cpu(mask) selection then always selects CPU0 as the target.

Commit 601c9421 ("arm64: use cpu_online_mask when using forced irq_set_affinity") intended to fix the above mentioned issue, but introduced another issue where affinity can be migrated to a wrong CPU due to the unconditional copy of cpu_online_mask.

As for arm, solve the issue by calling irq_set_affinity() with force=false from the CPU offline irq migration code, so the GIC driver validates the affinity mask against the CPU online mask and therefore removes CPU0 from the possible target candidates. Also revert the changes done in commit 601c9421, as they are no longer needed.

Tested on Juno platform.

Fixes: 601c9421 ("arm64: use cpu_online_mask when using forced irq_set_affinity")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.10.x
Signed-off-by: Will Deacon <will.deacon@arm.com>
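A trimmed sketch of the migration step after this fix, using current generic-IRQ accessor names for illustration (error handling and locking elided):

```c
static bool migrate_one_irq(struct irq_desc *desc)
{
	struct irq_data *d = irq_desc_get_irq_data(desc);
	const struct cpumask *affinity = irq_data_get_affinity_mask(d);
	struct irq_chip *c = irq_data_get_irq_chip(d);
	bool broke_affinity = false;

	/* If no online CPU is left in the mask, fall back to all online CPUs. */
	if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
		affinity = cpu_online_mask;
		broke_affinity = true;
	}

	/*
	 * force=false: the irqchip (e.g. the GIC) validates the mask
	 * against cpu_online_mask itself, so the dying CPU can never be
	 * chosen as the target.
	 */
	if (c->irq_set_affinity)
		c->irq_set_affinity(d, affinity, false);

	return broke_affinity;
}
```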
-
- 03 Sep 2014 (2 commits)
Committed by Marc Zyngier
All the arm64 irqchip drivers have been converted to handle_domain_irq, making it possible to remove the handle_IRQ stub entirely.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lkml.kernel.org/r/1409047421-27649-26-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
Committed by Marc Zyngier
In order to limit code duplication, convert the architecture specific handle_IRQ to use the generic __handle_domain_irq function.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lkml.kernel.org/r/1409047421-27649-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
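After the conversion, the arch handler reduces to a thin wrapper, roughly as below (using the helper's signature as it existed at the time):

```c
/*
 * With a NULL domain and lookup=false, __handle_domain_irq() treats the
 * number as an already-mapped Linux IRQ, wraps it in irq_enter()/
 * irq_exit() and the pt_regs switch, and dispatches it, matching the
 * old open-coded handle_IRQ() behaviour.
 */
void handle_IRQ(unsigned int irq, struct pt_regs *regs)
{
	__handle_domain_irq(NULL, irq, false, regs);
}
```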
-
- 12 May 2014 (1 commit)
Committed by Sudeep Holla
Commit 01f8fa4f ("genirq: Allow forcing cpu affinity of interrupts") enabled the forced irq_set_affinity which previously refused to route an interrupt to an offline cpu. Commit ffde1de6 ("irqchip: gic: Support forced affinity setting") implements this force logic and disables the cpu online check for the GIC interrupt controller.

When __cpu_disable calls migrate_irqs, it disables the current cpu in cpu_online_mask and uses forced irq_set_affinity to migrate the IRQs away from the cpu, but passes an affinity mask that still includes the cpu being offlined. When calling irq_set_affinity with force == true in a cpu hotplug path, the caller must ensure that the cpu being offlined is not present in the affinity mask, or it may be selected as the target CPU, leading to the interrupt not being migrated.

This patch uses cpu_online_mask when using forced irq_set_affinity so that the IRQs are properly migrated away.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 Oct 2013 (1 commit)
Committed by Mark Rutland
This patch adds the basic infrastructure necessary to support CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug support will depend on an implementation's cpu_operations (e.g. PSCI).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 Mar 2013 (1 commit)
Committed by Catalin Marinas
This patch uses the generic irqchip_init() function for initialising the interrupt controller on arm64. It also adds several definitions required by the ARM GIC irqchip driver but does not enable ARM_GIC yet.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
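In practice this boils down to something like the following sketch:

```c
#include <linux/irqchip.h>

/*
 * The arch init hook hands off to the generic irqchip core, which
 * probes the interrupt controller described by the firmware tables and
 * registers its top-level handler via set_handle_irq().
 */
void __init init_IRQ(void)
{
	irqchip_init();
}
```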
-
- 17 Sep 2012 (1 commit)
Committed by Marc Zyngier
This patch adds the support for IRQ handling. The actual interrupt controller will be part of a separate patch (going into drivers/irqchip/).

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-