1. 16 Aug 2017, 2 commits
    • M
      arm64: add basic VMAP_STACK support · e3067861
      Committed by Mark Rutland
      This patch enables arm64 to be built with vmap'd task and IRQ stacks.
      
      As vmap'd stacks are mapped at page granularity, stacks must be a multiple of
      PAGE_SIZE. This means that a 64K page kernel must use stacks of at least 64K in
      size.
      
      To minimize the increase in Image size, IRQ stacks are dynamically allocated at
      boot time, rather than embedding the boot CPU's IRQ stack in the kernel image.
      
      This patch was co-authored by Ard Biesheuvel and Mark Rutland.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      e3067861
    • M
      arm64: use an irq stack pointer · f60fe78f
      Committed by Mark Rutland
      We allocate our IRQ stacks using a percpu array. This allows us to generate our
      IRQ stack pointers with adr_this_cpu, but bloats the kernel Image with the boot
      CPU's IRQ stack. Additionally, these are packed with other percpu variables,
      and aren't guaranteed to have guard pages.
      
      When we enable VMAP_STACK we'll want to vmap our IRQ stacks also, in order to
      provide guard pages and to permit more stringent alignment requirements. Doing
      so will require that we use a percpu pointer to each IRQ stack, rather than
      allocating a percpu IRQ stack in the kernel image.
      
      This patch updates our IRQ stack code to use a percpu pointer to the base of
      each IRQ stack. This will allow us to change the way the stack is allocated
      with minimal changes elsewhere. In some cases we may try to backtrace before
      the IRQ stack pointers are initialised, so on_irq_stack() is updated to account
      for this.
      
      In testing with cyclictest, there was no measurable difference between using
      adr_this_cpu (for irq_stack) and ldr_this_cpu (for irq_stack_ptr) in the IRQ
      entry path.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      f60fe78f
  2. 22 Dec 2015, 1 commit
    • J
      arm64: remove irq_count and do_softirq_own_stack() · d224a69e
      Committed by James Morse
      sysrq_handle_reboot() re-enables interrupts while on the irq stack. The
      irq_stack implementation wrongly assumed this would only ever happen
      via the softirq path, allowing it to update irq_count late, in
      do_softirq_own_stack().
      
      This means if an irq occurs in sysrq_handle_reboot(), during
      emergency_restart() the stack will be corrupted, as irq_count wasn't
      updated.
      
      Lose the optimisation: rather than moving the adding/subtracting of
      irq_count into irq_stack_entry/irq_stack_exit, remove it entirely and
      compare sp_el0 (struct thread_info) with sp & ~(THREAD_SIZE - 1). This
      tells us whether we are on a task stack; if so, we can safely switch
      to the irq stack. Finally, remove do_softirq_own_stack(), as we no
      longer need it.
      Reported-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      [will: use get_thread_info macro]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      d224a69e
  3. 08 Dec 2015, 2 commits
  4. 10 Oct 2015, 1 commit
    • Y
      arm64: fix a migrating irq bug when hotplug cpu · 217d453d
      Committed by Yang Yingliang
      When a cpu is disabled, all of its irqs are migrated to another cpu.
      In some cases the new affinity differs from the old one, so the old
      affinity needs to be updated; but if irq_set_affinity()'s return value
      is IRQ_SET_MASK_OK_DONE, the old affinity cannot be updated. Fix this
      by using irq_do_set_affinity() instead.
      
      And migrating interrupts is a core code matter, so use the generic
      function irq_migrate_all_off_this_cpu() to migrate interrupts in
      kernel/irq/migration.c.
      
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      217d453d
  5. 27 Jul 2015, 1 commit
  6. 22 Jul 2015, 1 commit
  7. 25 Nov 2014, 1 commit
  8. 04 Sep 2014, 1 commit
    • S
      arm64: use irq_set_affinity with force=false when migrating irqs · 3d8afe30
      Committed by Sudeep Holla
      The arm64 interrupt migration code on cpu offline calls
      irqchip.irq_set_affinity() with the argument force=true. Originally
      this argument had no effect because it was not used by any interrupt
      chip driver and there was no semantics defined.
      
      This changed with commit 01f8fa4f ("genirq: Allow forcing cpu
      affinity of interrupts") which made the force argument useful to route
      interrupts to not yet online cpus without checking the target cpu
      against the cpu online mask. The following commit ffde1de6
      ("irqchip: gic: Support forced affinity setting") implemented this for
      the GIC interrupt controller.
      
      As a consequence, the cpu offline irq migration fails if CPU0 is
      offlined, because CPU0 is still set in the affinity mask and the
      validation against the cpu online mask is skipped due to the force
      argument being true. The subsequent first_cpu(mask) selection then
      always selects CPU0 as the target.
      
      Commit 601c9421 ("arm64: use cpu_online_mask when using forced
      irq_set_affinity") intended to fix the above mentioned issue, but
      introduced another issue where affinity can be migrated to a wrong
      CPU due to the unconditional copy of cpu_online_mask.
      
      As with arm, solve the issue by calling irq_set_affinity() with
      force=false from the CPU offline irq migration code so the GIC driver
      validates the affinity mask against CPU online mask and therefore
      removes CPU0 from the possible target candidates. Also revert the
      changes done in the commit 601c9421 as it's no longer needed.
      
      Tested on Juno platform.
      
      Fixes: 601c9421 ("arm64: use cpu_online_mask when using forced
      irq_set_affinity")
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org> # 3.10.x
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      3d8afe30
  9. 03 Sep 2014, 2 commits
  10. 12 May 2014, 1 commit
    • S
      arm64: use cpu_online_mask when using forced irq_set_affinity · 601c9421
      Committed by Sudeep Holla
      Commit 01f8fa4f ("genirq: Allow forcing cpu affinity of interrupts")
      enabled the forced irq_set_affinity which previously refused to route an
      interrupt to an offline cpu.
      
      Commit ffde1de6 ("irqchip: gic: Support forced affinity setting")
      implements this force logic and disables the cpu online check for GIC
      interrupt controller.
      
      When __cpu_disable calls migrate_irqs, it disables the current cpu in
      cpu_online_mask and uses forced irq_set_affinity to migrate the IRQs
      away from the cpu but passes affinity mask with the cpu being offlined
      also included in it.
      
      When calling irq_set_affinity with force == true in a cpu hotplug path,
      the caller must ensure that the cpu being offlined is not present in the
      affinity mask or it may be selected as the target CPU, leading to the
      interrupt not being migrated.
      
      This patch uses cpu_online_mask when using forced irq_set_affinity so
      that the IRQs are properly migrated away.
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      601c9421
  11. 25 Oct 2013, 1 commit
  12. 27 Mar 2013, 1 commit
  13. 17 Sep 2012, 1 commit