1. 26 September 2017 (5 commits)
    • genirq/irqdomain: Propagate early activation · 42e1cc2d
      Committed by Thomas Gleixner
      Propagate the early activation mode to the irqdomain activate()
      callbacks. This is required for the upcoming reservation, late vector
      assignment scheme, so that the early activation call can act accordingly.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Juergen Gross <jgross@suse.com>
      Tested-by: Yu Chen <yu.c.chen@intel.com>
      Acked-by: Juergen Gross <jgross@suse.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Rui Zhang <rui.zhang@intel.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Link: https://lkml.kernel.org/r/20170913213153.028353660@linutronix.de
      42e1cc2d
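      As an editor's illustration (not the upstream code), a domain driver's
      activate() callback can now distinguish early activation roughly along
      these lines; the function name and the hypothetical example_assign_vector()
      helper are placeholders:

        static int example_domain_activate(struct irq_domain *d,
                                           struct irq_data *irqd, bool early)
        {
                if (early) {
                        /* Early/reservation mode: only note the reservation and
                           defer the real resource assignment to late activation. */
                        return 0;
                }
                /* Late activation: commit the actual resources here. */
                return example_assign_vector(irqd);     /* hypothetical helper */
        }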
    • genirq/irqdomain: Allow irq_domain_activate_irq() to fail · bb9b428a
      Committed by Thomas Gleixner
      Allow irq_domain_activate_irq() to fail. This is required to support a
      reservation and late vector assignment scheme.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Juergen Gross <jgross@suse.com>
      Tested-by: Yu Chen <yu.c.chen@intel.com>
      Acked-by: Juergen Gross <jgross@suse.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Rui Zhang <rui.zhang@intel.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Link: https://lkml.kernel.org/r/20170913213152.933882227@linutronix.de
      bb9b428a
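      A hedged sketch of what this means for callers (illustrative, not a quote
      of the upstream diff): the return value of irq_domain_activate_irq() now
      has to be checked and propagated instead of being ignored.

        int ret;

        ret = irq_domain_activate_irq(irqd, false);
        if (ret)
                return ret;     /* e.g. no vector could be reserved/assigned */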
    • genirq: Separate activation and startup · c942cee4
      Committed by Thomas Gleixner
      Activation of an interrupt and startup are currently combined
      functionality. That has worked so far, but upcoming changes require a strict
      separation because the activation can fail in the future.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Juergen Gross <jgross@suse.com>
      Tested-by: Yu Chen <yu.c.chen@intel.com>
      Acked-by: Juergen Gross <jgross@suse.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Rui Zhang <rui.zhang@intel.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Link: https://lkml.kernel.org/r/20170913213152.754334077@linutronix.de
      c942cee4
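      A simplified sketch of the resulting request flow (names follow this
      series; treat the snippet as pseudocode rather than the literal diff):
      activation happens first and may fail, and only a successful activation
      is followed by the separate startup step.

        ret = irq_activate(desc);               /* activation, may fail later on */
        if (ret)
                goto out_unlock;                /* nothing has been started yet  */

        if (!(new->flags & IRQF_NOAUTOEN))
                irq_startup(desc, IRQ_RESEND, IRQ_START_COND);  /* separate startup */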
    • genirq: Make state consistent for !IRQ_DOMAIN_HIERARCHY · 457f6d35
      Committed by Thomas Gleixner
      In the !IRQ_DOMAIN_HIERARCHY case the activation stubs are not
      setting/clearing the activation status bits. This is not a problem at the
      moment, but upcoming changes require a correct status.
      
      Add the set/clear invocations to the stub functions and move them to the
      core internal header to avoid duplication and visibility outside the core.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Juergen Gross <jgross@suse.com>
      Tested-by: Yu Chen <yu.c.chen@intel.com>
      Acked-by: Juergen Gross <jgross@suse.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Rui Zhang <rui.zhang@intel.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Link: https://lkml.kernel.org/r/20170913213152.591985591@linutronix.de
      457f6d35
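      A minimal sketch of such stubs, assuming accessor names along the lines of
      irqd_set_activated()/irqd_clr_activated(); the exact prototypes are
      illustrative:

        /* !IRQ_DOMAIN_HIERARCHY stubs, kept in the core internal header. */
        static inline void irq_domain_activate_irq(struct irq_data *data)
        {
                irqd_set_activated(data);       /* keep the status bit in sync */
        }

        static inline void irq_domain_deactivate_irq(struct irq_data *data)
        {
                irqd_clr_activated(data);       /* clear it again on deactivation */
        }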
    • genirq/msi: Capture device name for debugfs · 07557ccb
      Committed by Thomas Gleixner
      For debugging the allocation of unused or potentially leaked interrupt
      descriptors it's helpful to have some information about the site which
      allocated them. In case of MSI this is simple because the caller hands the
      device struct pointer into the domain allocation function.
      
      Duplicate the device name and show it in the debugfs entry of the interrupt
      descriptor.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Juergen Gross <jgross@suse.com>
      Tested-by: Yu Chen <yu.c.chen@intel.com>
      Acked-by: Juergen Gross <jgross@suse.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Rui Zhang <rui.zhang@intel.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Link: https://lkml.kernel.org/r/20170913213152.433038426@linutronix.de
      07557ccb
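      A hedged sketch of the idea: duplicate dev_name(dev) when the MSI domain
      allocates the interrupts and hang it off the descriptor so debugfs can
      print it. The helper name, the dev_name field and the exact hook point are
      assumptions for illustration only.

        static void irq_debugfs_copy_devname(unsigned int virq, struct device *dev)
        {
                struct irq_desc *desc = irq_to_desc(virq);

                if (desc)       /* duplicated name is shown in the debugfs entry */
                        desc->dev_name = kstrdup(dev_name(dev), GFP_KERNEL);
        }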
  2. 14 August 2017 (1 commit)
  3. 18 July 2017 (1 commit)
  4. 04 July 2017 (1 commit)
  5. 24 June 2017 (3 commits)
  6. 23 June 2017 (8 commits)
    • genirq: Add force argument to irq_startup() · 4cde9c6b
      Committed by Thomas Gleixner
      In order to handle managed interrupts gracefully on irq_startup() so they
      won't lose their assigned affinity, it's necessary to allow startups which
      keep the interrupts in managed shutdown state if none of the assigned CPUs
      is online. This allows drivers to request interrupts without the CPUs being
      online, which avoids online/offline churn in drivers.
      
      Add a force argument which can override that decision and let only
      request_irq() and enable_irq() allow the managed shutdown
      handling. enable_irq() is required, because the interrupt might be
      requested with IRQF_NOAUTOEN and enable_irq() invokes irq_startup() which
      would then wreck the assignment again. All other callers force startup
      and potentially break the assigned affinity.
      
      No functional change as this only adds the function argument.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235447.112094565@linutronix.de
      4cde9c6b
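      A sketch of the intended usage, assuming the IRQ_START_COND/IRQ_START_FORCE
      naming from this series (illustrative, not the literal diff):

        /* request_irq()/enable_irq() path: conditional startup, the interrupt may
           stay in managed shutdown state if none of the assigned CPUs is online. */
        irq_startup(desc, IRQ_RESEND, IRQ_START_COND);

        /* All other callers keep the previous behaviour and force the startup. */
        irq_startup(desc, IRQ_RESEND, IRQ_START_FORCE);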
    • genirq: Introduce IRQD_MANAGED_SHUTDOWN · 54fdf6a0
      Committed by Thomas Gleixner
      Affinity managed interrupts should keep their assigned affinity across CPU
      hotplug. To avoid magic hackery in device drivers, the core code shall
      manage them transparently. This will set these interrupts into a managed
      shutdown state when the last CPU of the assigned affinity mask goes
      offline. The interrupt will be restarted when one of the CPUs in the
      assigned affinity mask comes back online.
      
      Introduce the necessary state flag and the accessor functions.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235446.954523476@linutronix.de
      54fdf6a0
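      A minimal sketch of the flag and its accessors, following the existing
      IRQD_* accessor pattern (details illustrative):

        static inline bool irqd_is_managed_and_shutdown(struct irq_data *d)
        {
                return __irqd_to_state(d) & IRQD_MANAGED_SHUTDOWN;
        }

        static inline void irqd_set_managed_shutdown(struct irq_data *d)
        {
                __irqd_to_state(d) |= IRQD_MANAGED_SHUTDOWN;
        }

        static inline void irqd_clr_managed_shutdown(struct irq_data *d)
        {
                __irqd_to_state(d) &= ~IRQD_MANAGED_SHUTDOWN;
        }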
    • genirq: Move irq_fixup_move_pending() to core · 36d84fb4
      Committed by Thomas Gleixner
      Now that x86 uses the generic code, the function declaration and inline
      stub can move to the core internal header.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235445.928156166@linutronix.de
      36d84fb4
    • genirq/cpuhotplug: Add support for cleaning up move in progress · f0383c24
      Committed by Thomas Gleixner
      In order to move x86 to the generic hotplug migration code, add support for
      cleaning up move in progress bits.
      
      On architectures which do not have this x86-specific (mis)feature enabled,
      this is optimized out by the compiler.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235445.525817311@linutronix.de
      f0383c24
    • genirq: Move pending helpers to internal.h · 137221df
      Committed by Christoph Hellwig
      So that the affinity code can reuse them.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20170619235445.109426284@linutronix.de
      137221df
    • genirq: Rename setup_affinity() to irq_setup_affinity() · 43564bd9
      Committed by Thomas Gleixner
      Rename it with a proper irq_ prefix and make it available for other files
      in the core code. Preparatory patch for moving the irq affinity setup
      around.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235444.928501004@linutronix.de
      43564bd9
    • genirq: Remove mask argument from setup_affinity() · cba4235e
      Committed by Thomas Gleixner
      No point in having this alloc/free dance of cpumasks. Provide a static mask
      for setup_affinity() and protect it properly.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235444.851571573@linutronix.de
      cba4235e
    • genirq/debugfs: Add proper debugfs interface · 087cdfb6
      Committed by Thomas Gleixner
      Debugging (hierarchical) interrupt domains is tedious as there is no
      information about the hierarchy and no information about states of
      interrupts in the various domain levels.
      
      Add a debugfs directory 'irq' and subdirectories 'domains' and 'irqs'.
      
      The domains directory contains the domain files. The content is information
      about the domain. If the domain is part of a hierarchy then the parent
      domains are printed as well.
      
      # ls /sys/kernel/debug/irq/domains/
      default     INTEL-IR-2	    INTEL-IR-MSI-2  IO-APIC-IR-2  PCI-MSI
      DMAR-MSI    INTEL-IR-3	    INTEL-IR-MSI-3  IO-APIC-IR-3  unknown-1
      INTEL-IR-0  INTEL-IR-MSI-0  IO-APIC-IR-0    IO-APIC-IR-4  VECTOR
      INTEL-IR-1  INTEL-IR-MSI-1  IO-APIC-IR-1    PCI-HT
      
      # cat /sys/kernel/debug/irq/domains/VECTOR 
      name:   VECTOR
       size:   0
       mapped: 216
       flags:  0x00000041
      
      # cat /sys/kernel/debug/irq/domains/IO-APIC-IR-0 
      name:   IO-APIC-IR-0
       size:   24
       mapped: 19
       flags:  0x00000041
       parent: INTEL-IR-3
          name:   INTEL-IR-3
           size:   65536
           mapped: 167
           flags:  0x00000041
           parent: VECTOR
              name:   VECTOR
               size:   0
               mapped: 216
               flags:  0x00000041
      
      Unfortunately there is no per cpu information about the VECTOR domain (yet).
      
      The irqs directory contains detailed information about mapped interrupts.
      
      # cat /sys/kernel/debug/irq/irqs/3
      handler:  handle_edge_irq
      status:   0x00004000
      istate:   0x00000000
      ddepth:   1
      wdepth:   0
      dstate:   0x01018000
                  IRQD_IRQ_DISABLED
                  IRQD_SINGLE_TARGET
                  IRQD_MOVE_PCNTXT
      node:     0
      affinity: 0-143
      effectiv: 0
      pending:  
      domain:  IO-APIC-IR-0
       hwirq:   0x3
       chip:    IR-IO-APIC
        flags:   0x10
                   IRQCHIP_SKIP_SET_WAKE
       parent:
          domain:  INTEL-IR-3
           hwirq:   0x20000
           chip:    INTEL-IR
            flags:   0x0
           parent:
              domain:  VECTOR
               hwirq:   0x3
               chip:    APIC
                flags:   0x0
      
      This was developed to simplify the debugging of the managed affinity
      changes.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235444.537566163@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      087cdfb6
  7. 21 June 2017 (1 commit)
  8. 04 July 2016 (1 commit)
  9. 18 June 2016 (1 commit)
    • genirq: Add untracked irq handler · edd14cfe
      Committed by Keith Busch
      This adds a software irq handler for controllers that multiplex
      interrupts from multiple devices, but don't know which device generated
      the interrupt. For these devices, the irq handler that demuxes must
      check every action for every software irq using the same h/w irq in order
      to find out which device generated the interrupt. This will inevitably
      trigger spurious interrupt detection if we keep accounting the irq as usual.
      
      The new irq handler does not track the handling for spurious interrupt
      detection. An irq that uses it also won't have its stats tracked, since it
      didn't generate the interrupt, nor will it be added to the randomness pool,
      since these events are not random.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: linux-pci@vger.kernel.org
      Cc: Jon Derrick <jonathan.derrick@intel.com>
      Link: http://lkml.kernel.org/r/1466200821-29159-1-git-send-email-keith.busch@intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      edd14cfe
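      A hedged usage sketch: a demultiplexing driver could install the new
      untracked flow handler for the per-device software irqs it fans out to, so
      the unavoidable IRQ_NONE returns from the other devices' actions do not feed
      the spurious-interrupt detector. The demux_domain and demux_irq_chip names
      are hypothetical.

        virq = irq_create_mapping(demux_domain, hwirq);   /* per-device software irq */
        irq_set_chip_and_handler(virq, &demux_irq_chip, handle_untracked_irq);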
  10. 13 June 2016 (1 commit)
  11. 24 February 2016 (1 commit)
  12. 15 February 2016 (1 commit)
  13. 10 November 2015 (1 commit)
    • genirq/PM: Restore system wake up from chained interrupts · 4717f133
      Committed by Grygorii Strashko
      Commit e509bd7d ("genirq: Allow migration of chained interrupts
      by installing default action") breaks PCS wake up IRQ behaviour on
      TI OMAP based platforms (dra7-evm).
      
      TI OMAP IRQ wake up configuration:
      GIC-irqchip->PCM_IRQ
        |- omap_prcm_register_chain_handler
           |- PRCM-irqchip -> PRCM_IO_IRQ
              |- pcs_irq_chain_handler
                 |- pinctrl-irqchip -> PCS_uart1_wakeup_irq
      
      This happens because IRQ PM code (irq/pm.c) is expected to ignore
      chained interrupts by default:
        static bool suspend_device_irq(struct irq_desc *desc)
        {
      	if (!desc->action || desc->no_suspend_depth)
      		return false;
       - for chained interrupts, !desc->action is expected to be true;
      
      but, after the above change, all chained interrupt descriptors have
      the default action handler (chained_action) installed.
      As a result, chained interrupts will be silently disabled during system
      suspend.
      
      Hence, fix it by introducing the helper function irq_desc_is_chained() and
      using it in suspend_device_irq() to identify chained interrupts and skip
      them once detected.
      
      Fixes: e509bd7d ("genirq: Allow migration of chained interrupts..")
      Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: <nsekhar@ti.com>
      Cc: <linux-arm-kernel@lists.infradead.org>
      Cc: Tony Lindgren <tony@atomide.com>
      Link: http://lkml.kernel.org/r/1447149492-20699-1-git-send-email-grygorii.strashko@ti.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      4717f133
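      A sketch of the fix described above; close to the idea in the changelog,
      though not necessarily the literal upstream diff:

        /* internals.h: a chained descriptor carries the default chained_action. */
        static inline bool irq_desc_is_chained(struct irq_desc *desc)
        {
                return (desc->action && desc->action == &chained_action);
        }

        /* pm.c: skip chained interrupts again when suspending device irqs. */
        static bool suspend_device_irq(struct irq_desc *desc)
        {
                if (!desc->action || irq_desc_is_chained(desc) ||
                    desc->no_suspend_depth)
                        return false;
                /* ... remainder unchanged ... */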
  14. 10 October 2015 (1 commit)
    • genirq: Allow migration of chained interrupts by installing default action · e509bd7d
      Committed by Mika Westerberg
      When a CPU is offlined all interrupts that have an action are migrated to
      other still online CPUs. However, if the interrupt has chained handler
      installed this is not done. Chained handlers are used by GPIO drivers which
      support interrupts, for instance.
      
      When the affinity is not corrected properly we end up in a situation where
      most interrupts no longer arrive at the online CPUs. For example, on
      Intel Braswell system which has SD-card card detection signal connected to
      a GPIO the IO-APIC routing entries look like below after CPU1 is offlined:
      
        pin30, enabled , level, low , V(52), IRR(0), S(0), logical , D(03), M(1)
        pin31, enabled , level, low , V(42), IRR(0), S(0), logical , D(03), M(1)
        pin32, enabled , level, low , V(62), IRR(0), S(0), logical , D(03), M(1)
        pin5b, enabled , level, low , V(72), IRR(0), S(0), logical , D(03), M(1)
      
      The problem here is that the destination mask still contains both CPUs even
      if CPU1 is already offline. This means that the IO-APIC still routes
      interrupts to the other CPU as well.
      
      We solve the problem by providing a default action for chained interrupts.
      This action allows the migration code to correct affinity (as it finds
      desc->action != NULL).
      
      Also make the default action handler emit a warning if for some reason a
      chained handler ends up calling it.
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Link: http://lkml.kernel.org/r/1444039935-30475-1-git-send-email-mika.westerberg@linux.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      e509bd7d
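      A sketch of what such a default action can look like; the handler is only
      ever reached if a chained interrupt is mistakenly handled as a normal
      action, hence the warning (illustrative, assuming the chained_action name):

        static irqreturn_t bad_chained_irq(int irq, void *dev_id)
        {
                WARN_ONCE(1, "Chained irq %d should not call an action\n", irq);
                return IRQ_NONE;
        }

        /* Dummy action so desc->action != NULL for chained interrupts and the
           hotplug migration code corrects their affinity as well. */
        struct irqaction chained_action = {
                .handler = bad_chained_irq,
        };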
  15. 22 September 2015 (1 commit)
  16. 16 September 2015 (1 commit)
  17. 12 July 2015 (4 commits)
  18. 08 July 2015 (1 commit)
    • hotplug: Prevent alloc/free of irq descriptors during cpu up/down · a8994181
      Committed by Thomas Gleixner
      When a cpu goes up some architectures (e.g. x86) have to walk the irq
      space to set up the vector space for the cpu. While this needs extra
      protection at the architecture level we can avoid a few race
      conditions by preventing the concurrent allocation/free of irq
      descriptors and the associated data.
      
      When a cpu goes down it moves the interrupts which are targeted to
      this cpu away by reassigning the affinities. While this happens
      interrupts can be allocated and freed, which opens a can of race
      conditions in the code which reassigns the affinities because
      interrupt descriptors might be freed underneath.
      
      Example:
      
      CPU1				CPU2
      cpu_up/down
       irq_desc = irq_to_desc(irq);
      				remove_from_radix_tree(desc);
       raw_spin_lock(&desc->lock);
      				free(desc);
      
      We could protect the irq descriptors with RCU, but that would require
      a full tree change of all accesses to interrupt descriptors. But
      fortunately these kind of race conditions are rather limited to a few
      things like cpu hotplug. The normal setup/teardown is very well
      serialized. So the simpler and obvious solution is:
      
      Prevent allocation and freeing of interrupt descriptors across cpu
      hotplug.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: xiao jin <jin.xiao@intel.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Link: http://lkml.kernel.org/r/20150705171102.063519515@linutronix.de
      a8994181
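      A hedged sketch of the serialization, assuming helpers along the lines of
      irq_lock_sparse()/irq_unlock_sparse() wrapped around the CPU bring-up and
      tear-down paths:

        /* Hold the sparse irq lock while the hotplug code walks and retargets
           interrupts, so descriptors cannot be allocated or freed underneath. */
        irq_lock_sparse();
        ret = __cpu_up(cpu, idle);      /* likewise around the cpu-down path */
        irq_unlock_sparse();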
  19. 16 June 2015 (1 commit)
  20. 12 June 2015 (3 commits)
  21. 13 December 2014 (1 commit)
    • genirq: Prevent proc race against freeing of irq descriptors · c291ee62
      Committed by Thomas Gleixner
      Since the rework of the sparse interrupt code to actually free the
      unused interrupt descriptors there exists a race between the /proc
      interfaces to the irq subsystem and the code which frees the interrupt
      descriptor.
      
      CPU0				CPU1
      				show_interrupts()
      				  desc = irq_to_desc(X);
      free_desc(desc)
        remove_from_radix_tree();
        kfree(desc);
      				  raw_spinlock_irq(&desc->lock);
      
      /proc/interrupts is the only interface which can actively corrupt
      kernel memory via the lock access. /proc/stat can only read from freed
      memory. Extremely hard to trigger, but possible.
      
      The interfaces in /proc/irq/N/ are not affected by this because the
      removal of the proc file is serialized in procfs against concurrent
      readers/writers. The removal happens before the descriptor is freed.
      
      For architectures which have CONFIG_SPARSE_IRQ=n this is a non-issue
      as the descriptor is never freed. It's merely cleared out with the irq
      descriptor lock held. So any concurrent proc access will either see
      the old correct value or the cleared out ones.
      
      Protect the lookup and access to the irq descriptor in
      show_interrupts() with the sparse_irq_lock.
      
      Provide kstat_irqs_usr(), which protects the lookup and access
      with sparse_irq_lock and switch /proc/stat to use it.
      
      Document the existing kstat_irqs interfaces so it's clear that the
      caller needs to take care of protection. The users of these
      interfaces are either not affected due to SPARSE_IRQ=n or already
      protected against removal.
      
      Fixes: 1f5a5b87 "genirq: Implement a sane sparse_irq allocator"
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      c291ee62
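      A sketch of the protected accessor, assuming it simply wraps the existing
      kstat_irqs() lookup in the sparse irq lock:

        /* /proc accessors use this instead of calling kstat_irqs() directly. */
        unsigned int kstat_irqs_usr(unsigned int irq)
        {
                unsigned int sum;

                irq_lock_sparse();
                sum = kstat_irqs(irq);          /* descriptor cannot vanish here */
                irq_unlock_sparse();

                return sum;
        }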
  22. 01 September 2014 (1 commit)
    • genirq: Simplify wakeup mechanism · 9ce7a258
      Committed by Thomas Gleixner
      Currently we suspend wakeup interrupts by lazily disabling them and
      checking later whether the interrupt has fired, but that's not sufficient
      for suspend to idle as there is no way to check that once we
      transitioned into the CPU idle state.
      
      So we change the mechanism in the following way:
      
      1) Leave the wakeup interrupts enabled across suspend
      
      2) Add a check to irq_may_run(), which is called at the beginning of
         each flow handler, for whether the interrupt is an armed wakeup source.
      
         This check is basically free as it just extends the existing check
         for IRQD_IRQ_INPROGRESS. So no new conditional in the hot path.
      
         If the IRQD_WAKEUP_ARMED flag is set, then the interrupt is
         disabled, marked as pending/suspended and the pm core is notified
         about the wakeup event.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      [ rjw: syscore.c and put irq_pm_check_wakeup() into pm.c ]
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      9ce7a258
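      A sketch of the extended check, assuming it lives in irq_may_run() as the
      changelog describes; helper names follow the changelog and are otherwise
      illustrative:

        static bool irq_may_run(struct irq_desc *desc)
        {
                unsigned int mask = IRQD_IRQ_INPROGRESS | IRQD_WAKEUP_ARMED;

                /* Common case: neither in progress nor an armed wakeup source. */
                if (!irqd_has_set(&desc->irq_data, mask))
                        return true;

                /* Armed wakeup source: disable, mark pending/suspended and
                   notify the PM core about the wakeup event. */
                if (irq_pm_check_wakeup(desc))
                        return false;

                /* Otherwise the interrupt is in progress on another CPU. */
                return irq_check_poll(desc);
        }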