1. 11 Jun 2020, 1 commit
    • x86/entry: Unbreak __irqentry_text_start/end magic · f0178fc0
      Committed by Thomas Gleixner
      The entry rework moved interrupt entry code from the irqentry to the
      noinstr section which made the irqentry section empty.
      
      This breaks boundary checks which rely on the __irqentry_text_start/end
      markers to find out whether a function in a stack trace is
      interrupt/exception entry code. This affects the function graph tracer and
      filter_irq_stacks().
      
      As the IDT entry points are all sequentially emitted, this is rather simple
      to unbreak by injecting __irqentry_text_start/end as global labels.
      
      To make this work correctly:
      
        - Remove the IRQENTRY_TEXT section from the x86 linker script
        - Define __irqentry so it breaks the build if it's used
        - Adjust the entry mirroring in PTI
        - Remove the redundant kprobes and unwinder bound checks
      Reported-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  2. 08 Mar 2020, 1 commit
  3. 19 Nov 2019, 1 commit
  4. 12 Nov 2019, 1 commit
  5. 19 Aug 2019, 1 commit
  6. 23 Jul 2019, 1 commit
    • PCI: irq: Introduce rearm_wake_irq() · 3a79bc63
      Committed by Rafael J. Wysocki
      Introduce a new function, rearm_wake_irq(), allowing a wakeup IRQ
      to be armed for system wakeup detection again without running any
      action handlers associated with it after it has been armed for
      wakeup detection and triggered.
      
      That is useful for IRQs, like the ACPI SCI, that may deliver wakeup
      as well as non-wakeup interrupts when armed for system wakeup
      detection.  In those cases, it may be possible to determine whether
      or not the delivered interrupt is a system wakeup one without
      running the entire action handler (or handlers, if the IRQ is
      shared) for the IRQ, and if the interrupt turns out to be a
      non-wakeup one, the IRQ can be rearmed with the help of the
      new function.
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
  7. 15 Jun 2019, 1 commit
  8. 22 Mar 2019, 1 commit
  9. 18 Feb 2019, 3 commits
    • genirq/affinity: Add new callback for (re)calculating interrupt sets · c66d4bd1
      Committed by Ming Lei
      The interrupt affinity spreading mechanism supports spreading out
      affinities for one or more interrupt sets. An interrupt set contains one or
      more interrupts. Each set is mapped to a specific functionality of a
      device, e.g. general I/O queues and read I/O queues of multiqueue block
      devices.
      
      The number of interrupts per set is defined by the driver. It depends on
      the total number of available interrupts for the device, which is
      determined by the PCI capabilities and the availability of underlying CPU
      resources, and the number of queues which the device provides and the
      driver wants to instantiate.
      
      The driver passes initial configuration for the interrupt allocation via a
      pointer to struct irq_affinity.
      
      Right now the allocation mechanism is complex as it requires a loop
      in the driver to determine the maximum number of interrupts which are
      provided by the PCI capabilities and the underlying CPU resources.  This
      loop would have to be replicated in every driver which wants to utilize
      this mechanism, which is unwanted code duplication and error prone.
      
      In order to move this into generic facilities it is required to have a
      mechanism, which allows the recalculation of the interrupt sets and their
      size, in the core code. As the core code does not have any knowledge about the
      underlying device, a driver specific callback is required in struct
      irq_affinity, which can be invoked by the core code. The callback gets the
      number of available interrupts as an argument, so the driver can calculate the
      corresponding number and size of interrupt sets.
      
      At the moment the struct irq_affinity pointer which is handed in from the
      driver and passed through to several core functions is marked 'const', but for
      the callback to be able to modify the data in the struct it's required to
      remove the 'const' qualifier.
      
      Add the optional callback to struct irq_affinity, which allows drivers to
      recalculate the number and size of interrupt sets and remove the 'const'
      qualifier.
      
      For simple invocations, which do not supply a callback, a default callback
      is installed, which just sets nr_sets to 1 and transfers the number of
      spreadable vectors to the set_size array at index 0.
      
      This is for now guarded by a check for nr_sets != 0 to keep the NVME driver
      working until it is converted to the callback mechanism.
      
      To make sure that the driver configuration is correct under all circumstances
      the callback is invoked even when there are no interrupts for queues left,
      i.e. the pre/post requirements already exhaust the number of available
      interrupts.
      
      At the PCI layer irq_create_affinity_masks() has to be invoked even for the
      case where the legacy interrupt is used. That ensures that the callback is
      invoked and the device driver can adjust to that situation.
      
      [ tglx: Fixed the simple case (no sets required). Moved the sanity check
        	for nr_sets after the invocation of the callback so it catches
        	broken drivers. Fixed the kernel doc comments for struct
        	irq_affinity and de-'This patch'-ed the changelog ]
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bjorn Helgaas <helgaas@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: linux-block@vger.kernel.org
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: linux-nvme@lists.infradead.org
      Cc: linux-pci@vger.kernel.org
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Sumit Saxena <sumit.saxena@broadcom.com>
      Cc: Kashyap Desai <kashyap.desai@broadcom.com>
      Cc: Shivasharan Srikanteshwara <shivasharan.srikanteshwara@broadcom.com>
      Link: https://lkml.kernel.org/r/20190216172228.512444498@linutronix.de
      
    • genirq/affinity: Store interrupt sets size in struct irq_affinity · 9cfef55b
      Committed by Ming Lei
      The interrupt affinity spreading mechanism supports spreading out
      affinities for one or more interrupt sets. An interrupt set contains one
      or more interrupts. Each set is mapped to a specific functionality of a
      device, e.g. general I/O queues and read I/O queues of multiqueue block
      devices.
      
      The number of interrupts per set is defined by the driver. It depends on
      the total number of available interrupts for the device, which is
      determined by the PCI capabilities and the availability of underlying CPU
      resources, and the number of queues which the device provides and the
      driver wants to instantiate.
      
      The driver passes initial configuration for the interrupt allocation via
      a pointer to struct irq_affinity.
      
      Right now the allocation mechanism is complex as it requires a loop in
      the driver to determine the maximum number of interrupts which are
      provided by the PCI capabilities and the underlying CPU resources.
      This loop would have to be replicated in every driver which wants to
      utilize this mechanism, which is unwanted code duplication and error
      prone.
      
      In order to move this into generic facilities it is required to have a
      mechanism, which allows the recalculation of the interrupt sets and
      their size, in the core code. As the core code does not have any
      knowledge about the underlying device, a driver specific callback will
      be added to struct irq_affinity, which will be invoked by the core
      code. The callback will get the number of available interrupts as an
      argument, so the driver can calculate the corresponding number and size
      of interrupt sets.
      
      To support this, two modifications for the handling of struct irq_affinity
      are required:
      
      1) The (optional) interrupt sets size information is contained in a
         separate array of integers and struct irq_affinity contains a
         pointer to it.
      
         This is cumbersome and as the maximum number of interrupt sets is small,
         there is no reason to have separate storage. Moving the size array into
         struct irq_affinity avoids indirections and makes the code simpler.
      
      2) At the moment the struct irq_affinity pointer which is handed in from
         the driver and passed through to several core functions is marked
         'const'.
      
         With the upcoming callback to recalculate the number and size of
         interrupt sets, it's necessary to remove the 'const'
         qualifier. Otherwise the callback would not be able to update the data.
      
      Implement #1 and store the interrupt sets size in 'struct irq_affinity'.
      
      No functional change.
      
      [ tglx: Fixed the memcpy() size so it won't copy beyond the size of the
        	source. Fixed the kernel doc comments for struct irq_affinity and
        	de-'This patch'-ed the changelog ]
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bjorn Helgaas <helgaas@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: linux-block@vger.kernel.org
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: linux-nvme@lists.infradead.org
      Cc: linux-pci@vger.kernel.org
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Sumit Saxena <sumit.saxena@broadcom.com>
      Cc: Kashyap Desai <kashyap.desai@broadcom.com>
      Cc: Shivasharan Srikanteshwara <shivasharan.srikanteshwara@broadcom.com>
      Link: https://lkml.kernel.org/r/20190216172228.423723127@linutronix.de
      
    • genirq/affinity: Code consolidation · 0145c30e
      Committed by Thomas Gleixner
      All information and calculations in the interrupt affinity spreading code
      are strictly unsigned int, though the code uses int all over the place.
      
      Convert it over to unsigned int.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bjorn Helgaas <helgaas@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: linux-block@vger.kernel.org
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: linux-nvme@lists.infradead.org
      Cc: linux-pci@vger.kernel.org
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Sumit Saxena <sumit.saxena@broadcom.com>
      Cc: Kashyap Desai <kashyap.desai@broadcom.com>
      Cc: Shivasharan Srikanteshwara <shivasharan.srikanteshwara@broadcom.com>
      Link: https://lkml.kernel.org/r/20190216172228.336424556@linutronix.de
  10. 05 Feb 2019, 2 commits
    • genirq: Provide NMI management for percpu_devid interrupts · 4b078c3f
      Committed by Julien Thierry
      Add support for percpu_devid interrupts treated as NMIs.
      
      Percpu_devid NMIs need to be setup/torn down on each CPU they target.
      
      The same restrictions as for global NMIs still apply for percpu_devid NMIs.
      Signed-off-by: Julien Thierry <julien.thierry@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • genirq: Provide basic NMI management for interrupt lines · b525903c
      Committed by Julien Thierry
      Add functionality to allocate interrupt lines that will deliver IRQs
      as Non-Maskable Interrupts. These allocations are only successful if
      the irqchip provides the necessary support and allows NMI delivery for the
      interrupt line.
      
      Interrupt lines allocated for NMI delivery must be enabled/disabled through
      enable_nmi/disable_nmi_nosync to keep their state consistent.
      
      To treat a PERCPU IRQ as NMI, the interrupt must not be shared nor threaded,
      the irqchip directly managing the IRQ must be the root irqchip and the
      irqchip cannot be behind a slow bus.
      Signed-off-by: Julien Thierry <julien.thierry@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  11. 18 Jan 2019, 1 commit
  12. 19 Dec 2018, 2 commits
    • genirq/affinity: Add is_managed to struct irq_affinity_desc · c410abbb
      Committed by Dou Liyang
      Devices which use managed interrupts usually have two classes of
      interrupts:
      
        - Interrupts for multiple device queues
        - Interrupts for general device management
      
      Currently both classes are treated the same way, i.e. as managed
      interrupts. The general interrupts get the default affinity mask assigned
      while the device queue interrupts are spread out over the possible CPUs.
      
      Treating the general interrupts as managed is both a limitation and under
      certain circumstances a bug. Assume the following situation:
      
       default_irq_affinity = 4..7
      
      So if CPUs 4-7 are offlined, then the core code will shut down the device
      management interrupts because the last CPU in their affinity mask went
      offline.
      
      It's also a limitation because it's desired to allow manual placement of
      the general device interrupts for various reasons. If they are marked
      managed then the interrupt affinity setting from both user and kernel space
      is disabled. That limitation was reported by Kashyap and Sumit.
      
      Expand struct irq_affinity_desc with a new bit 'is_managed' which is set
      for truly managed interrupts (queue interrupts) and cleared for the general
      device interrupts.
      
      [ tglx: Simplify code and massage changelog ]
      Reported-by: Kashyap Desai <kashyap.desai@broadcom.com>
      Reported-by: Sumit Saxena <sumit.saxena@broadcom.com>
      Signed-off-by: Dou Liyang <douliyangs@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-pci@vger.kernel.org
      Cc: shivasharan.srikanteshwara@broadcom.com
      Cc: ming.lei@redhat.com
      Cc: hch@lst.de
      Cc: bhelgaas@google.com
      Cc: douliyang1@huawei.com
      Link: https://lkml.kernel.org/r/20181204155122.6327-3-douliyangs@gmail.com
    • genirq/core: Introduce struct irq_affinity_desc · bec04037
      Committed by Dou Liyang
      The interrupt affinity management uses straight cpumask pointers to convey
      the automatically assigned affinity masks for managed interrupts. The core
      interrupt descriptor allocation also decides, based on the pointer being
      non-NULL, whether an interrupt is managed or not.
      
      Devices which use managed interrupts usually have two classes of
      interrupts:
      
        - Interrupts for multiple device queues
        - Interrupts for general device management
      
      Currently both classes are treated the same way, i.e. as managed
      interrupts. The general interrupts get the default affinity mask assigned
      while the device queue interrupts are spread out over the possible CPUs.
      
      Treating the general interrupts as managed is both a limitation and under
      certain circumstances a bug. Assume the following situation:
      
       default_irq_affinity = 4..7
      
      So if CPUs 4-7 are offlined, then the core code will shut down the device
      management interrupts because the last CPU in their affinity mask went
      offline.
      
      It's also a limitation because it's desired to allow manual placement of
      the general device interrupts for various reasons. If they are marked
      managed then the interrupt affinity setting from both user and kernel space
      is disabled.
      
      To remedy that situation it's required to convey more information than the
      cpumasks through various interfaces related to interrupt descriptor
      allocation.
      
      Instead of adding yet another argument, create a new data structure
      'irq_affinity_desc' which for now just contains the cpumask. This struct
      can be expanded to convey auxiliary information in the next step.
      
      No functional change, just preparatory work.
      
      [ tglx: Simplified logic and clarified changelog ]
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Suggested-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Dou Liyang <douliyangs@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-pci@vger.kernel.org
      Cc: kashyap.desai@broadcom.com
      Cc: shivasharan.srikanteshwara@broadcom.com
      Cc: sumit.saxena@broadcom.com
      Cc: ming.lei@redhat.com
      Cc: hch@lst.de
      Cc: douliyang1@huawei.com
      Link: https://lkml.kernel.org/r/20181204155122.6327-2-douliyangs@gmail.com
  13. 05 Nov 2018, 1 commit
  14. 09 Oct 2018, 1 commit
  15. 14 May 2018, 2 commits
  16. 16 Feb 2018, 1 commit
  17. 16 Nov 2017, 1 commit
  18. 02 Nov 2017, 1 commit
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Committed by Greg Kroah-Hartman
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boiler plate text.
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it,
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information,
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to be applied to
      a file was done in a spreadsheet of side-by-side results of the
      output of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files created by Philippe Ombredanne.  Philippe prepared the
      base worksheet, and did an initial spot review of a few 1000 files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      to be applied to the file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      Criteria used to select files for SPDX license identifier tagging was:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when both scanners couldn't find any license traces, file was
         considered to have no license information in it, and the top level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note" otherwise it was "GPL-2.0".  Results of that was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were any new insights.
      The Windriver scanner is based on an older version of FOSSology in part,
      so they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with SPDX license identifier in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and have been fixed to reflect the
      correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types.)  Finally Greg ran the script using the .csv files to
      generate the patches.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  19. 10 Aug 2017, 1 commit
  20. 07 Jul 2017, 1 commit
    • genirq: Allow to pass the IRQF_TIMER flag with percpu irq request · c80081b9
      Committed by Daniel Lezcano
      The irq timings infrastructure tracks when interrupts occur in order to
      statistically predict the next interrupt event.
      
      There is no point in tracking timer interrupts and trying to predict them
      because the next expiration time is already known. This can be avoided via
      the IRQF_TIMER flag which is passed by timer drivers in request_irq(). It
      marks the interrupt as timer based, which allows the timings code to
      ignore these interrupts.
      
      Per CPU interrupts which are requested via request_percpu_irq() have no
      flags argument, so marking per-CPU timer interrupts is not possible and
      they get tracked pointlessly.
      
      Add __request_percpu_irq() as a variant of request_percpu_irq() with a
      flags argument and make request_percpu_irq() an inline wrapper passing
      flags = 0.
      
      The flag parameter is restricted to IRQF_TIMER as all other IRQF_ flags
      make no sense for per cpu interrupts.
      
      The next step is to convert all existing users of request_percpu_irq() and
      then remove the wrapper and the underscores.
      
      [ tglx: Massaged changelog ]
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: peterz@infradead.org
      Cc: nicolas.pitre@linaro.org
      Cc: vincent.guittot@linaro.org
      Cc: rafael@kernel.org
      Link: http://lkml.kernel.org/r/1499344144-3964-1-git-send-email-daniel.lezcano@linaro.org
  21. 24 Jun 2017, 2 commits
    • genirq/timings: Add infrastructure for estimating the next interrupt arrival time · e1c92149
      Committed by Daniel Lezcano
      An interrupt typically shows a burst of activity at a periodic interval of
      time, followed by one or two gaps of longer interval.
      
      As the time intervals are periodic, statistically speaking they follow a
      normal distribution and each interrupt can be tracked individually.
      
      Add a mechanism to compute the statistics on all interrupts, except the
      timers which are deterministic from a prediction point of view, as their
      expiry time is known.
      
      The goal is to extract the periodicity for each interrupt, with the last
      timestamp and sum them, so the next event can be predicted to a certain
      extent.
      
      Taking the earliest prediction gives the expected wakeup on the system
      (assuming a timer won't expire before).
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Hannes Reinecke <hare@suse.com>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: "Rafael J . Wysocki" <rafael@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Link: http://lkml.kernel.org/r/1498227072-5980-2-git-send-email-daniel.lezcano@linaro.org
    • genirq/timings: Add infrastructure to track the interrupt timings · b2d3d61a
      Committed by Daniel Lezcano
      The interrupt framework gives a lot of information about each interrupt. It
      does not keep track of when those interrupts occur though, which is a
      prerequisite for estimating the next interrupt arrival for power management
      purposes.
      
      Add a mechanism to record the timestamp of each interrupt occurrence in a
      per-CPU circular buffer to help with the prediction of the next occurrence
      using a statistical model.
      
      Each CPU can store up to IRQ_TIMINGS_SIZE events <irq, timestamp>, the
      current value of IRQ_TIMINGS_SIZE is 32.
      
      Each event is encoded into a single u64, where the high 48 bits are used
      for the timestamp and the low 16 bits are for the irq number.
      
      A static key is introduced so when the irq prediction is switched off at
      runtime, the overhead is near to zero.
      
      For inlining reasons most of the code lives in internals.h, with very
      little in the new file timings.c. The latter will gain more in the next
      patch, which will provide the statistical model for the next event
      prediction.
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Hannes Reinecke <hare@suse.com>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: "Rafael J . Wysocki" <rafael@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Link: http://lkml.kernel.org/r/1498227072-5980-1-git-send-email-daniel.lezcano@linaro.org
  22. 23 May 2017, 1 commit
  23. 19 Apr 2017, 1 commit
  24. 09 Nov 2016, 3 commits
  25. 15 Sep 2016, 2 commits
  26. 04 Jul 2016, 1 commit
  27. 26 Mar 2016, 1 commit
  28. 09 Mar 2016, 1 commit
    • x86/ACPI/PCI: Recognize that Interrupt Line 255 means "not connected" · e237a551
      Committed by Chen Fan
      Per the x86-specific footnote to PCI spec r3.0, sec 6.2.4, the value 255 in
      the Interrupt Line register means "unknown" or "no connection."
      Previously, when we couldn't derive an IRQ from the _PRT, we fell back to
      using the value from Interrupt Line as an IRQ.  It's questionable whether
      we should do that at all, but the spec clearly suggests we shouldn't do it
      for the value 255 on x86.
      
      Calling request_irq() with IRQ 255 may succeed, but the driver won't
      receive any interrupts.  Or, if IRQ 255 is shared with another device, it
      may succeed, and the driver's ISR will be called at random times when the
      *other* device interrupts.  Or it may fail if another device is using IRQ
      255 with incompatible flags.  What we *want* is for request_irq() to fail
      predictably so the driver can fall back to polling.
      
      On x86, assume 255 in the Interrupt Line means the INTx line is not
      connected.  In that case, set dev->irq to IRQ_NOTCONNECTED so request_irq()
      will fail gracefully with -ENOTCONN.
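      The core of the rule can be sketched in a few lines of user-space C. The
      helper name is hypothetical and only mirrors the check described above; the
      real patch sets dev->irq to IRQ_NOTCONNECTED inside the ACPI PCI IRQ code.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* x86-specific: 0xff (255) in the PCI Interrupt Line register means
       * "unknown" / "no connection" (PCI spec r3.0, sec 6.2.4 footnote). */
      #define PCI_INTLINE_NOT_CONNECTED 0xff

      /* Hypothetical helper mirroring the patch's rule. */
      static bool intx_connected(unsigned char interrupt_line)
      {
          return interrupt_line != PCI_INTLINE_NOT_CONNECTED;
      }

      int main(void)
      {
          /* A normal routed line vs. the "not connected" sentinel. */
          printf("line 0x0b: %s\n", intx_connected(0x0b) ? "connected" : "not connected");
          printf("line 0xff: %s\n", intx_connected(0xff) ? "connected" : "not connected");
          return 0;
      }
      ```

      With the sentinel recognized, a driver calling request_irq() gets a
      predictable -ENOTCONN and can fall back to polling, instead of a request
      that succeeds but never delivers interrupts.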
      
      We found this problem on a system where Secure Boot firmware assigned
      Interrupt Line 255 to an i801_smbus device and another device was already
      using MSI-X IRQ 255.  This was in v3.10, where i801_probe() fails if
      request_irq() fails:
      
        i801_smbus 0000:00:1f.3: enabling device (0140 -> 0143)
        i801_smbus 0000:00:1f.3: can't derive routing for PCI INT C
        i801_smbus 0000:00:1f.3: PCI INT C: no GSI
        genirq: Flags mismatch irq 255. 00000080 (i801_smbus) vs. 00000000 (megasa)
        CPU: 0 PID: 2487 Comm: kworker/0:1 Not tainted 3.10.0-229.el7.x86_64 #1
        Hardware name: FUJITSU PRIMEQUEST 2800E2/D3736, BIOS PRIMEQUEST 2000 Serie5
        Call Trace:
          dump_stack+0x19/0x1b
          __setup_irq+0x54a/0x570
          request_threaded_irq+0xcc/0x170
          i801_probe+0x32f/0x508 [i2c_i801]
          local_pci_probe+0x45/0xa0
        i801_smbus 0000:00:1f.3: Failed to allocate irq 255: -16
        i801_smbus: probe of 0000:00:1f.3 failed with error -16
      
      After aeb8a3d1 ("i2c: i801: Check if interrupts are disabled"),
      i801_probe() will fall back to polling if request_irq() fails.  But we
      still need this patch because request_irq() may succeed or fail depending
      on other devices in the system.  If request_irq() fails, i801_smbus will
      work by falling back to polling, but if it succeeds, i801_smbus won't work
      because it expects interrupts that it may not receive.
      Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      e237a551
  29. 12 Dec 2015, 1 commit
  30. 08 Dec 2015, 1 commit
    • T
      genirq: Implement irq_percpu_is_enabled() · f0cb3220
      Thomas Petazzoni authored
      Certain interrupt controller drivers have a register set that does not
      make it easy to save/restore the mask of enabled/disabled interrupts
      at suspend/resume time. At resume time, such drivers rely on the core
      kernel irq subsystem to tell whether a given interrupt is enabled
      or not, in order to restore the proper state in the interrupt
      controller register.
      
      While irqd_irq_disabled() provides the relevant information for
      global interrupts, there is no similar function to query the
      enabled/disabled state of a per-CPU interrupt.
      
      Therefore, this commit complements the percpu_irq API with an
      irq_percpu_is_enabled() function.
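      A toy user-space model of the idea, with invented names (the real API is
      just irq_percpu_is_enabled(irq, cpu) in the kernel): at resume time the
      driver rebuilds its hardware mask register by querying the core's per-CPU
      enabled state instead of having to save a copy at suspend time.

      ```c
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NR_CPUS 4
      #define NR_IRQS 8

      /* Toy per-CPU enabled state kept by the "core"; in the kernel this is
       * tracked per irq descriptor and queried via irq_percpu_is_enabled(). */
      static bool percpu_enabled[NR_CPUS][NR_IRQS];

      static bool model_irq_percpu_is_enabled(unsigned int irq, unsigned int cpu)
      {
          return percpu_enabled[cpu][irq];
      }

      /* At resume, reconstruct the controller's per-CPU mask register from the
       * core's view rather than from a saved snapshot. */
      static uint8_t rebuild_mask_register(unsigned int cpu)
      {
          uint8_t reg = 0;

          for (unsigned int irq = 0; irq < NR_IRQS; irq++)
              if (model_irq_percpu_is_enabled(irq, cpu))
                  reg |= 1u << irq;
          return reg;
      }

      int main(void)
      {
          percpu_enabled[1][2] = true;
          percpu_enabled[1][5] = true;
          printf("cpu1 mask: 0x%02x\n", rebuild_mask_register(1));
          return 0;
      }
      ```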
      
      [ tglx: Simplified the implementation and added kerneldoc ]
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Tawfik Bayouk <tawfik@marvell.com>
      Cc: Nadav Haklai <nadavh@marvell.com>
      Cc: Lior Amsalem <alior@marvell.com>
      Cc: Andrew Lunn <andrew@lunn.ch>
      Cc: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
      Cc: Gregory Clement <gregory.clement@free-electrons.com>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Link: http://lkml.kernel.org/r/1445347435-2333-2-git-send-email-thomas.petazzoni@free-electrons.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      f0cb3220
  31. 22 Sep 2015, 1 commit
    • T
      genirq: Handle force threading of irqs with primary and thread handler · 2a1d3ab8
      Thomas Gleixner authored
      Force threading of interrupts does not really deal with interrupts
      which are requested with a primary and a threaded handler. The current
      policy is to leave them alone and let the primary handler run in
      interrupt context, but we set the ONESHOT flag for those interrupts as
      well.
      
      Kohji Okuno debugged a problem with the SDHCI driver where the
      interrupt thread waits for a hardware interrupt to trigger, which can't
      work well because the hardware interrupt is masked due to the ONESHOT
      flag being set. He proposed to set the ONESHOT flag only if the
      interrupt does not provide a thread handler.
      
      That does not work either, though, because these interrupts can be
      shared, so the other handler would rightfully get the ONESHOT flag
      set and the same situation would arise again.
      
      To deal with this properly, we need to force thread the primary handler
      of such interrupts as well. That means that the primary interrupt
      handler is treated as any other primary interrupt handler which is not
      marked IRQF_NO_THREAD. The threaded handler becomes a separate thread
      so the SDHCI flow logic can be handled gracefully.
      
      The same issue was reported against 4.1-rt.
      Reported-and-tested-by: Kohji Okuno <okuno.kohji@jp.panasonic.com>
      Reported-By: Michal Smucr <msmucr@gmail.com>
      Reported-and-tested-by: Nathan Sullivan <nathan.sullivan@ni.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1509211058080.5606@nanos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      2a1d3ab8