1. 24 Jun 2017, 3 commits
  2. 23 Jun 2017, 8 commits
    • genirq: Add force argument to irq_startup() · 4cde9c6b
      Committed by Thomas Gleixner
      In order to handle managed interrupts gracefully on irq_startup() so they
      won't lose their assigned affinity, it's necessary to allow startups which
      keep the interrupts in managed shutdown state if none of the assigned CPUs
      is online. This allows drivers to request interrupts w/o the CPUs being
      online, which avoids online/offline churn in drivers.
      
      Add a force argument which can override that decision and let only
      request_irq() and enable_irq() allow the managed shutdown
      handling. enable_irq() is required because the interrupt might be
      requested with IRQF_NOAUTOEN, and enable_irq() invokes irq_startup(), which
      would then wreck the assignment again. All other callers force startup
      and potentially break the assigned affinity.
      
      No functional change as this only adds the function argument; a sketch of
      the resulting interface follows this entry.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235447.112094565@linutronix.de
      4cde9c6b
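
      A minimal sketch of the resulting interface (the constant names follow the
      patch description; the helper __irq_startup() and the exact checks are
      illustrative, not the verbatim mainline code):

        #define IRQ_START_COND   false  /* request_irq()/enable_irq(): may stay in managed shutdown */
        #define IRQ_START_FORCE  true   /* all other callers: start unconditionally */

        int irq_startup(struct irq_desc *desc, bool resend, bool force)
        {
                struct irq_data *d = irq_desc_get_irq_data(desc);

                if (!force && irqd_affinity_is_managed(d) &&
                    !cpumask_intersects(irq_data_get_affinity_mask(d),
                                        cpu_online_mask)) {
                        /* No assigned CPU online: keep it in managed shutdown state */
                        irqd_set_managed_shutdown(d);
                        return 0;
                }
                return __irq_startup(desc);     /* illustrative helper for the actual start */
        }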
    • genirq: Introduce IRQD_MANAGED_SHUTDOWN · 54fdf6a0
      Committed by Thomas Gleixner
      Affinity managed interrupts should keep their assigned affinity across CPU
      hotplug. To avoid magic hackery in device drivers, the core code shall
      manage them transparently, putting such interrupts into managed
      shutdown state when the last CPU of the assigned affinity mask goes
      offline. The interrupt will be restarted when one of the CPUs in the
      assigned affinity mask comes back online.
      
      Introduce the necessary state flag and the accessor functions, sketched
      after this entry.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235446.954523476@linutronix.de
      54fdf6a0
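
      A sketch of the new state bit and its accessors (the accessor names follow
      the patch; the bit position and file placement are illustrative):

        enum {
                /* ... existing IRQD_* state bits ... */
                IRQD_MANAGED_SHUTDOWN   = (1 << 23),    /* illustrative bit position */
        };

        static inline bool irqd_is_managed_and_shutdown(struct irq_data *d)
        {
                return __irqd_to_state(d) & IRQD_MANAGED_SHUTDOWN;
        }

        static inline void irqd_set_managed_shutdown(struct irq_data *d)
        {
                __irqd_to_state(d) |= IRQD_MANAGED_SHUTDOWN;
        }

        static inline void irqd_clr_managed_shutdown(struct irq_data *d)
        {
                __irqd_to_state(d) &= ~IRQD_MANAGED_SHUTDOWN;
        }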
    • genirq: Move irq_fixup_move_pending() to core · 36d84fb4
      Committed by Thomas Gleixner
      Now that x86 uses the generic code, the function declaration and inline
      stub can move to the core internal header.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235445.928156166@linutronix.de
      36d84fb4
    • genirq/cpuhotplug: Add support for cleaning up move in progress · f0383c24
      Committed by Thomas Gleixner
      In order to move x86 to the generic hotplug migration code, add support for
      cleaning up move in progress bits.
      
      On architectures which do not have this x86-specific (mis)feature enabled,
      the code is optimized out by the compiler (see the sketch after this entry).
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235445.525817311@linutronix.de
      f0383c24
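
      A sketch of the compile-out pattern (the declaration/stub pair; the helper
      name follows the series, the details are illustrative):

        /* kernel/irq/internals.h */
        #ifdef CONFIG_GENERIC_PENDING_IRQ
        bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear);
        #else
        static inline bool irq_fixup_move_pending(struct irq_desc *desc, bool fclear)
        {
                return false;   /* no move-in-progress handling on these architectures */
        }
        #endif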
    • genirq: Move pending helpers to internal.h · 137221df
      Committed by Christoph Hellwig
      So that the affinity code can reuse them.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20170619235445.109426284@linutronix.de
      137221df
    • genirq: Rename setup_affinity() to irq_setup_affinity() · 43564bd9
      Committed by Thomas Gleixner
      Rename it with a proper irq_ prefix and make it available for other files
      in the core code. Preparatory patch for moving the irq affinity setup
      around.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235444.928501004@linutronix.de
      43564bd9
    • genirq: Remove mask argument from setup_affinity() · cba4235e
      Committed by Thomas Gleixner
      No point in having this alloc/free dance of cpumasks. Provide a static mask
      for setup_affinity() and protect it properly (sketched after this entry).
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235444.851571573@linutronix.de
      cba4235e
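
      A sketch of the idea, with hypothetical names for the static mask and its
      lock (the real patch may organize the locking differently):

        static DEFINE_RAW_SPINLOCK(mask_lock);          /* hypothetical name */
        static struct cpumask setup_affinity_mask;      /* hypothetical name */

        static int setup_affinity(struct irq_desc *desc)
        {
                unsigned long flags;
                int ret;

                raw_spin_lock_irqsave(&mask_lock, flags);
                cpumask_and(&setup_affinity_mask, irq_default_affinity, cpu_online_mask);
                /* ... honour an explicitly set affinity hint, etc. ... */
                ret = irq_do_set_affinity(&desc->irq_data, &setup_affinity_mask, false);
                raw_spin_unlock_irqrestore(&mask_lock, flags);
                return ret;
        }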
    • genirq/debugfs: Add proper debugfs interface · 087cdfb6
      Committed by Thomas Gleixner
      Debugging (hierarchical) interrupt domains is tedious as there is no
      information about the hierarchy and no information about states of
      interrupts in the various domain levels.
      
      Add a debugfs directory 'irq' and subdirectories 'domains' and 'irqs'.
      
      The domains directory contains the domain files. The content is information
      about the domain. If the domain is part of a hierarchy then the parent
      domains are printed as well.
      
      # ls /sys/kernel/debug/irq/domains/
      default     INTEL-IR-2	    INTEL-IR-MSI-2  IO-APIC-IR-2  PCI-MSI
      DMAR-MSI    INTEL-IR-3	    INTEL-IR-MSI-3  IO-APIC-IR-3  unknown-1
      INTEL-IR-0  INTEL-IR-MSI-0  IO-APIC-IR-0    IO-APIC-IR-4  VECTOR
      INTEL-IR-1  INTEL-IR-MSI-1  IO-APIC-IR-1    PCI-HT
      
      # cat /sys/kernel/debug/irq/domains/VECTOR 
      name:   VECTOR
       size:   0
       mapped: 216
       flags:  0x00000041
      
      # cat /sys/kernel/debug/irq/domains/IO-APIC-IR-0 
      name:   IO-APIC-IR-0
       size:   24
       mapped: 19
       flags:  0x00000041
       parent: INTEL-IR-3
          name:   INTEL-IR-3
           size:   65536
           mapped: 167
           flags:  0x00000041
           parent: VECTOR
              name:   VECTOR
               size:   0
               mapped: 216
               flags:  0x00000041
      
      Unfortunately there is no per cpu information about the VECTOR domain (yet).
      
      The irqs directory contains detailed information about mapped interrupts.
      
      # cat /sys/kernel/debug/irq/irqs/3
      handler:  handle_edge_irq
      status:   0x00004000
      istate:   0x00000000
      ddepth:   1
      wdepth:   0
      dstate:   0x01018000
                  IRQD_IRQ_DISABLED
                  IRQD_SINGLE_TARGET
                  IRQD_MOVE_PCNTXT
      node:     0
      affinity: 0-143
      effectiv: 0
      pending:  
      domain:  IO-APIC-IR-0
       hwirq:   0x3
       chip:    IR-IO-APIC
        flags:   0x10
                   IRQCHIP_SKIP_SET_WAKE
       parent:
          domain:  INTEL-IR-3
           hwirq:   0x20000
           chip:    INTEL-IR
            flags:   0x0
           parent:
              domain:  VECTOR
               hwirq:   0x3
               chip:    APIC
                flags:   0x0
      
      This was developed to simplify the debugging of the managed affinity
      changes.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235444.537566163@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      087cdfb6
  3. 21 Jun 2017, 1 commit
  4. 04 Jul 2016, 1 commit
  5. 18 Jun 2016, 1 commit
    • genirq: Add untracked irq handler · edd14cfe
      Committed by Keith Busch
      This adds a software irq handler for controllers that multiplex
      interrupts from multiple devices, but don't know which device generated
      the interrupt. For these devices, the irq handler that demuxes must
      check every action for every software irq using the same h/w irq in order
      to find out which device generated the interrupt. This will inevitably
      trigger spurious interrupt detection if the core notes (accounts) the irq.
      
      The new irq handler does not track the handling for spurious interrupt
      detection. An irq that uses this handler also won't get stats tracked,
      since it didn't generate the interrupt, nor will it be added to randomness,
      since such events are not random. A usage sketch follows this entry.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: linux-pci@vger.kernel.org
      Cc: Jon Derrick <jonathan.derrick@intel.com>
      Link: http://lkml.kernel.org/r/1466200821-29159-1-git-send-email-keith.busch@intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      edd14cfe
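
      A usage sketch (the driver-side setup function is hypothetical;
      handle_untracked_irq() is the flow handler added by this patch):

        #include <linux/irq.h>

        static void demux_child_irq_setup(unsigned int virq)   /* hypothetical driver hook */
        {
                /*
                 * The demuxed per-device irqs share one h/w irq, so install the
                 * untracked flow handler: no spurious accounting, no stats, no
                 * entropy contribution.
                 */
                irq_set_chip_and_handler(virq, &dummy_irq_chip, handle_untracked_irq);
        }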
  6. 13 Jun 2016, 1 commit
  7. 24 Feb 2016, 1 commit
  8. 15 Feb 2016, 1 commit
  9. 10 Nov 2015, 1 commit
    • genirq/PM: Restore system wake up from chained interrupts · 4717f133
      Committed by Grygorii Strashko
      Commit e509bd7d ("genirq: Allow migration of chained interrupts
      by installing default action") breaks PCS wake up IRQ behaviour on
      TI OMAP based platforms (dra7-evm).
      
      TI OMAP IRQ wake up configuration:
      GIC-irqchip->PCM_IRQ
        |- omap_prcm_register_chain_handler
           |- PRCM-irqchip -> PRCM_IO_IRQ
              |- pcs_irq_chain_handler
                 |- pinctrl-irqchip -> PCS_uart1_wakeup_irq
      
      This happens because the IRQ PM code (irq/pm.c) is expected to ignore
      chained interrupts by default:
        static bool suspend_device_irq(struct irq_desc *desc)
        {
      	if (!desc->action || desc->no_suspend_depth)
      		return false;
      	...
        }
       - for chained interrupts, !desc->action is expected to be true;
      
      but, after the above change, all chained interrupt descriptors will
      have the default action handler installed - chained_action.
      As a result, chained interrupts will be silently disabled during system
      suspend.
      
      Hence, fix it by introducing the helper function irq_desc_is_chained() and
      using it in suspend_device_irq() to identify chained interrupts and skip
      them once detected (see the sketch after this entry).
      
      Fixes: e509bd7d ("genirq: Allow migration of chained interrupts..")
      Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: <nsekhar@ti.com>
      Cc: <linux-arm-kernel@lists.infradead.org>
      Link: http://lkml.kernel.org/r/1447149492-20699-1-git-send-email-grygorii.strashko@ti.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      4717f133
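
      A sketch of the fix as described above (close to the mainline change,
      though details may differ):

        static inline bool irq_desc_is_chained(struct irq_desc *desc)
        {
                return (desc->action && desc->action == &chained_action);
        }

        static bool suspend_device_irq(struct irq_desc *desc)
        {
                if (!desc->action || irq_desc_is_chained(desc) ||
                    desc->no_suspend_depth)
                        return false;
                /* ... remainder of the suspend handling ... */
                return true;
        }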
  10. 10 Oct 2015, 1 commit
    • genirq: Allow migration of chained interrupts by installing default action · e509bd7d
      Committed by Mika Westerberg
      When a CPU is offlined all interrupts that have an action are migrated to
      other still online CPUs. However, if the interrupt has a chained handler
      installed, this is not done. Chained handlers are used by GPIO drivers which
      support interrupts, for instance.
      
      When the affinity is not corrected properly, we end up in a situation where
      most interrupts are not arriving at the online CPUs anymore. For example, on
      an Intel Braswell system which has the SD-card card-detection signal connected
      to a GPIO, the IO-APIC routing entries look like below after CPU1 is offlined:
      
        pin30, enabled , level, low , V(52), IRR(0), S(0), logical , D(03), M(1)
        pin31, enabled , level, low , V(42), IRR(0), S(0), logical , D(03), M(1)
        pin32, enabled , level, low , V(62), IRR(0), S(0), logical , D(03), M(1)
        pin5b, enabled , level, low , V(72), IRR(0), S(0), logical , D(03), M(1)
      
      The problem here is that the destination mask still contains both CPUs even
      if CPU1 is already offline. This means that the IO-APIC still routes
      interrupts to the other CPU as well.
      
      We solve the problem by providing a default action for chained interrupts.
      This action allows the migration code to correct affinity (as it finds
      desc->action != NULL).
      
      Also make the default action handler emit a warning if for some reason a
      chained handler ends up calling it (a sketch of the handler follows this entry).
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Link: http://lkml.kernel.org/r/1444039935-30475-1-git-send-email-mika.westerberg@linux.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      e509bd7d
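
      A sketch of the default action (the names follow the patch; the warning
      text is paraphrased):

        static irqreturn_t bad_chained_irq(int irq, void *dev_id)
        {
                WARN_ONCE(1, "Chained irq %d should not call an action\n", irq);
                return IRQ_NONE;
        }

        /*
         * Installed on chained descriptors so the hotplug migration code sees
         * desc->action != NULL and fixes up the affinity on CPU offline.
         */
        struct irqaction chained_action = {
                .handler = bad_chained_irq,
        };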
  11. 22 Sep 2015, 1 commit
  12. 16 Sep 2015, 1 commit
  13. 12 Jul 2015, 4 commits
  14. 08 Jul 2015, 1 commit
    • hotplug: Prevent alloc/free of irq descriptors during cpu up/down · a8994181
      Committed by Thomas Gleixner
      When a cpu goes up, some architectures (e.g. x86) have to walk the irq
      space to set up the vector space for the cpu. While this needs extra
      protection at the architecture level, we can avoid a few race
      conditions by preventing the concurrent allocation/free of irq
      descriptors and the associated data.
      
      When a cpu goes down, it moves the interrupts which are targeted to
      this cpu away by reassigning the affinities. While this happens,
      interrupts can be allocated and freed, which opens a can of race
      conditions in the code which reassigns the affinities, because
      interrupt descriptors might be freed underneath.
      
      Example:
      
      CPU1				CPU2
      cpu_up/down
       irq_desc = irq_to_desc(irq);
      				remove_from_radix_tree(desc);
       raw_spin_lock(&desc->lock);
      				free(desc);
      
      We could protect the irq descriptors with RCU, but that would require
      a full tree change of all accesses to interrupt descriptors. But
      fortunately these kinds of race conditions are rather limited to a few
      things like cpu hotplug. The normal setup/teardown is very well
      serialized. So the simpler and obvious solution is:
      
      Prevent allocation and freeing of interrupt descriptors across cpu
      hotplug (a sketch of the serialization follows this entry).
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: xiao jin <jin.xiao@intel.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Link: http://lkml.kernel.org/r/20150705171102.063519515@linutronix.de
      a8994181
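
      A sketch of the serialization (the lock helpers live in kernel/irq; the
      hotplug call site is summarized, not verbatim):

        /* kernel/irq/irqdesc.c */
        void irq_lock_sparse(void)
        {
                mutex_lock(&sparse_irq_lock);
        }

        void irq_unlock_sparse(void)
        {
                mutex_unlock(&sparse_irq_lock);
        }

        /* kernel/cpu.c (simplified, hypothetical wrapper) */
        static int cpu_updown_serialized(unsigned int cpu)
        {
                int ret;

                irq_lock_sparse();
                ret = do_arch_cpu_updown_work(cpu);     /* placeholder for the real steps */
                irq_unlock_sparse();
                return ret;
        }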
  15. 16 Jun 2015, 1 commit
  16. 12 Jun 2015, 3 commits
  17. 13 Dec 2014, 1 commit
    • genirq: Prevent proc race against freeing of irq descriptors · c291ee62
      Committed by Thomas Gleixner
      Since the rework of the sparse interrupt code to actually free the
      unused interrupt descriptors there exists a race between the /proc
      interfaces to the irq subsystem and the code which frees the interrupt
      descriptor.
      
      CPU0				CPU1
      				show_interrupts()
      				  desc = irq_to_desc(X);
      free_desc(desc)
        remove_from_radix_tree();
        kfree(desc);
      				  raw_spinlock_irq(&desc->lock);
      
      /proc/interrupts is the only interface which can actively corrupt
      kernel memory via the lock access. /proc/stat can only read from freed
      memory. Extremely hard to trigger, but possible.
      
      The interfaces in /proc/irq/N/ are not affected by this because the
      removal of the proc file is serialized in procfs against concurrent
      readers/writers. The removal happens before the descriptor is freed.
      
      For architectures which have CONFIG_SPARSE_IRQ=n this is a non-issue
      as the descriptor is never freed. It's merely cleared out with the irq
      descriptor lock held. So any concurrent proc access will either see
      the old correct value or the cleared out ones.
      
      Protect the lookup and access to the irq descriptor in
      show_interrupts() with the sparse_irq_lock.
      
      Provide kstat_irqs_usr() which is protecting the lookup and access
      with sparse_irq_lock and switch /proc/stat to use it.
      
      Document the existing kstat_irqs interfaces so it's clear that the
      caller needs to take care of protection. The users of these
      interfaces are either not affected due to SPARSE_IRQ=n or already
      protected against removal. A sketch of the protected accessor follows
      this entry.
      
      Fixes: 1f5a5b87 "genirq: Implement a sane sparse_irq allocator"
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      c291ee62
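
      A sketch of the protected accessor described above (the pattern follows the
      patch description; kstat_irqs() itself is unchanged, and the patch may wrap
      the mutex in a helper):

        /* kernel/irq/irqdesc.c */
        unsigned int kstat_irqs_usr(unsigned int irq)
        {
                unsigned int sum;

                mutex_lock(&sparse_irq_lock);   /* descriptor cannot vanish while we read */
                sum = kstat_irqs(irq);
                mutex_unlock(&sparse_irq_lock);
                return sum;
        }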
  18. 01 Sep 2014, 3 commits
  19. 27 May 2014, 1 commit
  20. 16 May 2014, 1 commit
  21. 14 Mar 2014, 1 commit
  22. 05 Mar 2014, 1 commit
  23. 20 Feb 2014, 1 commit
    • genirq: Provide irq_wake_thread() · a92444c6
      Committed by Thomas Gleixner
      In the course of the sdhci/sdio discussion with Russell about killing
      the sdio kthread hackery, we discovered the need to be able to wake an
      interrupt thread from software.
      
      The rationale for this is that sdio hardware can lack proper
      interrupt support for certain features. So the driver needs to poll
      the status registers, but at the same time it needs to be woken up by
      a hardware interrupt.
      
      To be able to get rid of the home-brewed kthread construct of sdio,
      we need a way to wake an irq thread independent of an actual hardware
      interrupt.
      
      Provide an irq_wake_thread() function which wakes up the thread
      associated with a given dev_id. This allows sdio to invoke the irq
      thread from the hardware irq handler via the IRQ_WAKE_THREAD return
      value and provides a way to wake it via a timer for the polling
      scenarios. That allows the sdio logic to be simplified significantly.
      A usage sketch follows this entry.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Chris Ball <chris@printf.net>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140215003823.772565780@linutronix.de
      a92444c6
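
      A usage sketch (the driver structure and poll hook are hypothetical;
      irq_wake_thread() is the function this patch provides):

        void irq_wake_thread(unsigned int irq, void *dev_id);  /* provided by this patch */

        struct my_sdio_host {           /* hypothetical driver state */
                unsigned int irq;
                /* ... */
        };

        /* Poll path: kick the threaded handler without a hardware interrupt. */
        static void my_sdio_poll_expired(struct my_sdio_host *host)
        {
                /*
                 * dev_id must be the same cookie passed to request_threaded_irq(),
                 * so the core can find the matching irqaction and wake its thread.
                 */
                irq_wake_thread(host->irq, host);
        }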
  24. 25 May 2012, 1 commit