1. 15 Aug, 2017: 1 commit
  2. 27 Jul, 2017: 3 commits
    • printk/console: Enhance the check for consoles using init memory · 5a814231
      Petr Mladek authored
      printk_late_init() is responsible for disabling boot consoles that
      use init memory. It checks the address of struct console for this.
      
      But this is not enough. For example, there are several early
      consoles that have write() method in the init section and
      struct console in the normal section. They are not disabled
      and could cause fancy and hard to debug system states.
      
      It is even more complicated by the macros EARLYCON_DECLARE() and
      OF_EARLYCON_DECLARE() where various struct members are set at
      runtime by the provided setup() function.
      
      I have tried to reproduce this problem and forced the classic uart
      early console to stay using keep_bootcon parameter. In particular
      I used earlycon=uart,io,0x3f8 keep_bootcon console=ttyS0,115200.
      The system did not boot:
      
      [    1.570496] PM: Image not found (code -22)
      [    1.570496] PM: Image not found (code -22)
      [    1.571886] PM: Hibernation image not present or could not be loaded.
      [    1.571886] PM: Hibernation image not present or could not be loaded.
      [    1.576407] Freeing unused kernel memory: 2528K
      [    1.577244] kernel tried to execute NX-protected page - exploit attempt? (uid: 0)
      
      The double lines are caused by having both early uart console and
      ttyS0 console enabled at the same time. The early console stopped
      working when the init memory was freed. Fortunately, the invalid
      call was caught by the NX-protected page check and did not cause
      any silent fancy problems.
      
      This patch adds a check for many other addresses stored in
      struct console. It omits setup() and match(), which are used
      only when the console is registered; by this point they have
      already been called and there is no reason to call them again.
      
      Link: http://lkml.kernel.org/r/1500036673-7122-3-git-send-email-pmladek@suse.com
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Jiri Slaby <jslaby@suse.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
      Cc: "Fabio M. Di Nitto" <fdinitto@redhat.com>
      Cc: linux-serial@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Petr Mladek <pmladek@suse.com>
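      A minimal sketch of the kind of check being added, assuming the generic
      init_section_contains()/init_section_intersects() helpers from
      <asm-generic/sections.h>; the exact set of struct console members tested
      here is illustrative:

      #include <linux/console.h>
      #include <asm/sections.h>

      /* Sketch: does any code or data reachable from this console live in
       * init memory that is about to be freed? setup() and match() are
       * deliberately not checked: they are only used while the console is
       * being registered, which has already happened at this point. */
      static bool console_uses_init_memory(struct console *con)
      {
          return init_section_intersects(con, sizeof(*con)) ||
                 init_section_contains(con->write, 0) ||
                 init_section_contains(con->read, 0) ||
                 init_section_contains(con->device, 0) ||
                 init_section_contains(con->unblank, 0) ||
                 init_section_contains(con->data, 0);
      }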
    • printk/console: Always disable boot consoles that use init memory before it is freed · 2b1be689
      Matt Redfearn authored
      Commit 4c30c6f5 ("kernel/printk: do not turn off bootconsole in
      printk_late_init() if keep_bootcon") added a check on keep_bootcon to
      ensure that boot consoles were kept around until the real console is
      registered.
      
      This can lead to problems if the boot console data and code are in the
      init section, since it can be freed before the boot console is
      unregistered.
      
      Commit 81cc26f2 ("printk: only unregister boot consoles when
      necessary") fixed this in a better way: it allowed boot consoles
      that did not use init data to be kept. Unfortunately, it did not
      remove the keep_bootcon check.
      
      This can lead to crashes and weird panics when the bootconsole is
      accessed after free, especially if page poisoning is in use and the
      code / data have been overwritten with a poison value.
      
      To prevent this, always free the boot console if it is within the init
      section. In addition, print a warning that the console is being
      removed prematurely.
      
      Finally, there is a new comment explaining how to avoid the warning.
      It replaces an explanation that duplicated a more comprehensive
      function description a few lines above.
      
      Fixes: 4c30c6f5 ("kernel/printk: do not turn off bootconsole in printk_late_init() if keep_bootcon")
      Link: http://lkml.kernel.org/r/1500036673-7122-2-git-send-email-pmladek@suse.com
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Jiri Slaby <jslaby@suse.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
      Cc: "Fabio M. Di Nitto" <fdinitto@redhat.com>
      Cc: linux-serial@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
      [pmladek@suse.com: print the warning, code and comments clean up]
      Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Petr Mladek <pmladek@suse.com>
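      A minimal sketch of the disable-and-warn flow described above (the
      helper name and the exact warning text are illustrative; the real code
      lives in printk_late_init()):

      #include <linux/console.h>
      #include <linux/printk.h>
      #include <asm/sections.h>

      /* Sketch: unconditionally drop boot consoles that sit in init memory,
       * even when keep_bootcon was requested, and say so. */
      static void __init disable_boot_consoles_in_init_mem(void)
      {
          struct console *con;

          for_each_console(con) {
              if (!(con->flags & CON_BOOT))
                  continue;
              if (init_section_intersects(con, sizeof(*con))) {
                  /* Keeping it would become a use-after-free. */
                  pr_warn("bootconsole [%s%d] uses init memory and must be disabled even before the real one is ready\n",
                          con->name, con->index);
                  unregister_console(con);
              }
          }
      }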
    • printk: Modify operators of printed_len and text_len · aec47caa
      Pierre Kuo authored
      With commit ddb9baa8 ("printk: report lost messages in printk
      safe/nmi contexts") and commit 8b1742c9 ("printk: remove zap_locks()
      function"), it seems we can remove the "= 0" initialization of text_len
      and directly assign the result of log_output() to printed_len.
      
      Link: http://lkml.kernel.org/r/1499755255-6258-1-git-send-email-vichy.kuo@gmail.com
      Cc: rostedt@goodmis.org
      Cc: linux-kernel@vger.kernel.org
      Cc: joe@perches.com
      Signed-off-by: Pierre Kuo <vichy.kuo@gmail.com>
      Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Petr Mladek <pmladek@suse.com>
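      A heavily abbreviated sketch of the resulting pattern, assuming the
      4.13-era log_output() helper in kernel/printk/printk.c; the wrapper
      function below is purely illustrative:

      static size_t emit_sketch(int facility, int level, enum log_flags lflags,
                                const char *dict, size_t dictlen,
                                char *text, size_t text_len)
      {
          size_t printed_len;   /* was: size_t printed_len = 0; */

          /* text_len is fully set up before use, so printed_len can take
           * the return value of log_output() directly instead of "+=". */
          printed_len = log_output(facility, level, lflags,
                                   dict, dictlen, text, text_len);
          return printed_len;
      }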
  3. 30 Jun, 2017: 7 commits
  4. 29 Jun, 2017: 3 commits
  5. 27 Jun, 2017: 2 commits
  6. 26 Jun, 2017: 2 commits
  7. 24 Jun, 2017: 7 commits
  8. 23 Jun, 2017: 15 commits
    • sched/rt: Move RT related code from sched/core.c to sched/rt.c · 8887cd99
      Nicolas Pitre authored
      This helps make sched/core.c smaller and hopefully easier to understand and maintain.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170621182203.30626-3-nicolas.pitre@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/deadline: Move DL related code from sched/core.c to sched/deadline.c · 06a76fe0
      Nicolas Pitre authored
      This helps make sched/core.c smaller and hopefully easier to understand and maintain.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170621182203.30626-2-nicolas.pitre@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/cpuset: Only offer CONFIG_CPUSETS if SMP is enabled · e1d4eeec
      Nicolas Pitre authored
      Make CONFIG_CPUSETS=y depend on SMP as this feature makes no sense
      on UP. This allows for configuring out cpuset_cpumask_can_shrink()
      and task_can_attach() entirely, which shrinks the kernel a bit.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170614171926.8345-2-nicolas.pitre@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • genirq/irqdomain: Remove auto-recursive hierarchy support · 6a6544e5
      Marc Zyngier authored
      It did seem like a good idea at the time, but it never really
      caught on, and auto-recursive domains remain unused 3 years after
      having been introduced.
      
      Oh well, time for a late spring cleanup.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • genirq/irqdomain: Add irq_domain_update_bus_token helper · 61d0a000
      Marc Zyngier authored
      We can have irq domains that are identified by the same fwnode
      (because they are serviced by the same HW), and yet have different
      functionality (because they serve different buses, for example).
      This is what we use the bus_token field for.
      
      Since we don't use this field when generating the domain name,
      all the aliasing domains will get the same name, and the debugfs
      file creation fails. Also, bus_token is updated by individual drivers,
      and the core code is unaware of that update.
      
      In order to sort this mess, let's introduce a helper that takes care
      of updating bus_token and regenerating the debugfs file.
      
      A separate patch will update all the individual users.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
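      A hedged usage sketch of such a helper as seen from an irqchip driver;
      the driver function name is made up and DOMAIN_BUS_NEXUS is just one
      possible bus token:

      #include <linux/irqdomain.h>

      /* Sketch: two domains can share the same fwnode but serve different
       * buses. Instead of poking domain->bus_token directly, ask the core
       * to update it so the generated domain/debugfs name stays unique. */
      static void example_init_nexus_domain(struct fwnode_handle *fwnode,
                                            const struct irq_domain_ops *ops,
                                            void *host_data)
      {
          struct irq_domain *d;

          d = irq_domain_create_tree(fwnode, ops, host_data);
          if (d)
              irq_domain_update_bus_token(d, DOMAIN_BUS_NEXUS);
      }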
    • genirq/affinity: Assign vectors to all present CPUs · 9a0ef98e
      Christoph Hellwig authored
      Currently the irq vector spread algorithm is restricted to online CPUs,
      which ties the IRQ mapping to the currently online devices and doesn't deal
      nicely with the fact that CPUs could come and go rapidly due to e.g. power
      management.
      
      Instead assign vectors to all present CPUs to avoid this churn.
      
      Build a map of all possible CPUs for a given node, as the architectures
      only provide a map of all online CPUs. Do this dynamically on each call
      for the vector assignments, which is a bit suboptimal and could be
      optimized in the future by providing a mapping from the arch code.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: linux-block@vger.kernel.org
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: linux-nvme@lists.infradead.org
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20170603140403.27379-5-hch@lst.de
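      A minimal sketch of building the per-node map of present CPUs mentioned
      above (the helper name is made up and the error unwinding is trimmed):

      #include <linux/cpumask.h>
      #include <linux/topology.h>
      #include <linux/slab.h>

      /* Sketch: group all present CPUs by NUMA node, since the arch code
       * only hands out online-CPU information per node. */
      static cpumask_var_t *alloc_node_to_present_cpumask(void)
      {
          cpumask_var_t *masks;
          int node, cpu;

          masks = kcalloc(nr_node_ids, sizeof(*masks), GFP_KERNEL);
          if (!masks)
              return NULL;

          for (node = 0; node < nr_node_ids; node++)
              if (!zalloc_cpumask_var(&masks[node], GFP_KERNEL))
                  return NULL;        /* sketch: proper unwind omitted */

          for_each_present_cpu(cpu)
              cpumask_set_cpu(cpu, masks[cpu_to_node(cpu)]);

          return masks;
      }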
    • genirq/cpuhotplug: Avoid irq affinity setting for single targets · 8f31a984
      Thomas Gleixner authored
      Avoid trying to add a newly online CPU to the effective affinity mask of
      a started up interrupt. That interrupt will either stay on the already
      online CPU or move around for no value.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235447.431321047@linutronix.de
    • genirq: Introduce IRQD_SINGLE_TARGET flag · d52dd441
      Thomas Gleixner authored
      Many interrupt chips allow only a single CPU as interrupt target. The core
      code has no knowledge of that. That's unfortunate, as knowing it would let
      the core avoid trying to re-add a newly online CPU to the effective
      affinity mask.
      
      Add the status flag and the necessary accessors.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235447.352343969@linutronix.de
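      A hedged sketch of how such a flag is typically consumed: the irqchip
      marks its irq_data when it sets things up, and generic code tests the
      flag before bothering with re-targeting (the example_* names are
      hypothetical; the accessors follow the usual irqd_*() pattern):

      #include <linux/irq.h>

      /* Sketch: a chip that can only route an interrupt to one CPU marks it
       * as single-target once, e.g. from its domain alloc/map path. */
      static void example_chip_mark_single_target(struct irq_data *d)
      {
          irqd_set_single_target(d);
      }

      /* Core-side sketch: no point re-targeting a single-target interrupt. */
      static bool example_should_retarget(struct irq_data *d)
      {
          return !irqd_is_single_target(d);
      }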
    • genirq/cpuhotplug: Handle managed IRQs on CPU hotplug · c5cb83bb
      Thomas Gleixner authored
      If a CPU goes offline, interrupts affine to the CPU are moved away. If the
      outgoing CPU is the last CPU in the affinity mask, the migration code
      breaks the affinity and sets it to all online CPUs.
      
      This is a problem for affinity managed interrupts, as CPU hotplug is often
      used for power management purposes. If the affinity is broken, the
      interrupt is no longer affine to the CPUs to which it was allocated.
      
      The affinity spreading allows laying out multi-queue devices in a way that
      their queues are each assigned to a single CPU or a group of CPUs. If the
      last CPU goes offline, the queue is no longer used, so the interrupt can be
      shut down gracefully and parked until one of the assigned CPUs comes online
      again.
      
      Add a graceful shutdown mechanism into the irq affinity breaking code path,
      mark the irq as MANAGED_SHUTDOWN and leave the affinity mask unmodified.
      
      In the online path, scan the active interrupts for managed interrupts. If
      the interrupt is functional and the newly online CPU is part of the
      affinity mask, restart the interrupt if it is marked MANAGED_SHUTDOWN, or,
      if the interrupt is already started up, try to add the CPU back to the
      effective affinity mask.
      Originally-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20170619235447.273417334@linutronix.de
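      A rough sketch of the per-interrupt part of that online path. This is
      core code, so it would live inside kernel/irq/ and use internal helpers
      (irqd_has_set(), irq_startup() with IRQ_RESEND/IRQ_START_COND from
      kernel/irq/internals.h); locking and several corner-case checks are
      left out:

      /* Sketch: bring a managed interrupt back when one of its assigned
       * CPUs comes online again. */
      static void example_restore_managed_irq(struct irq_desc *desc,
                                              unsigned int cpu)
      {
          struct irq_data *data = irq_desc_get_irq_data(desc);
          const struct cpumask *affinity = irq_data_get_affinity_mask(data);

          if (!irqd_affinity_is_managed(data) ||
              !cpumask_test_cpu(cpu, affinity))
              return;

          if (irqd_has_set(data, IRQD_MANAGED_SHUTDOWN))
              /* Parked in managed shutdown state: start it up again. */
              irq_startup(desc, IRQ_RESEND, IRQ_START_COND);
          else
              /* Already running: let the CPU become a target again. */
              irq_set_affinity_locked(data, affinity, false);
      }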
    • genirq: Handle managed irqs gracefully in irq_startup() · 761ea388
      Thomas Gleixner authored
      Affinity managed interrupts should keep their assigned affinity across CPU
      hotplug. To avoid magic hackery in device drivers, the core code shall
      manage them transparently and set these interrupts into a managed shutdown
      state when the last CPU of the assigned affinity mask goes offline. The
      interrupt will be restarted when one of the CPUs in the assigned affinity
      mask comes back online.
      
      Add the necessary logic to irq_startup(). If an interrupt is requested and
      started up, the code checks whether it is affinity managed and, if so, it
      checks whether a CPU in the interrupt's affinity mask is online. If not, it
      puts the interrupt into managed shutdown state.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235447.189851170@linutronix.de
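      A condensed sketch of the decision added on the startup path; the
      three-way outcome and the EX_* names below are illustrative, and the
      managed-shutdown accessors come from kernel/irq/internals.h:

      enum {
          EX_STARTUP_NORMAL,     /* not managed: proceed as usual        */
          EX_STARTUP_MANAGED,    /* managed, and a target CPU is online  */
          EX_STARTUP_ABORT,      /* managed, no target online: park it   */
      };

      /* Sketch: decide whether a managed interrupt can really be started. */
      static int example_startup_managed(struct irq_data *d,
                                         const struct cpumask *aff)
      {
          if (!irqd_affinity_is_managed(d))
              return EX_STARTUP_NORMAL;

          irqd_clr_managed_shutdown(d);

          if (!cpumask_intersects(aff, cpu_online_mask)) {
              /* No assigned CPU is online: park the interrupt in managed
               * shutdown state and let CPU hotplug start it up later. */
              irqd_set_managed_shutdown(d);
              return EX_STARTUP_ABORT;
          }
          return EX_STARTUP_MANAGED;
      }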
    • genirq: Add force argument to irq_startup() · 4cde9c6b
      Thomas Gleixner authored
      In order to handle managed interrupts gracefully on irq_startup() so they
      won't lose their assigned affinity, it's necessary to allow startups which
      keep the interrupts in managed shutdown state if none of the assigned CPUs
      is online. This allows drivers to request interrupts without the CPUs being
      online, which avoids online/offline churn in drivers.
      
      Add a force argument which can override that decision, and let only
      request_irq() and enable_irq() allow the managed shutdown
      handling. enable_irq() is required because the interrupt might be
      requested with IRQF_NOAUTOEN, and enable_irq() invokes irq_startup(),
      which would then wreck the assignment again. All other callers force
      startup and potentially break the assigned affinity.
      
      No functional change as this only adds the function argument.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235447.112094565@linutronix.de
    • genirq: Split out irq_startup() code · 708d174b
      Thomas Gleixner authored
      Split out the inner workings of irq_startup() so it can be reused to handle
      managed interrupts gracefully.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235447.033235144@linutronix.de
    • genirq: Introduce IRQD_MANAGED_SHUTDOWN · 54fdf6a0
      Thomas Gleixner authored
      Affinity managed interrupts should keep their assigned affinity across CPU
      hotplug. To avoid magic hackery in device drivers, the core code shall
      manage them transparently. This will set these interrupts into a managed
      shutdown state when the last CPU of the assigned affinity mask goes
      offline. The interrupt will be restarted when one of the CPUs in the
      assigned affinity mask comes back online.
      
      Introduce the necessary state flag and the accessor functions.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235446.954523476@linutronix.de
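      A simplified illustration of the flag-plus-accessors pattern being
      introduced. The real state bit and accessors live in include/linux/irq.h
      and kernel/irq/internals.h; the ex_* code below is not the kernel source,
      just the shape of it:

      /* One bit in a per-irq state word, with trivial accessors. */
      enum {
          EX_IRQD_MANAGED_SHUTDOWN = (1 << 23),  /* bit position illustrative */
      };

      static inline bool ex_irqd_is_managed_and_shutdown(const unsigned int *state)
      {
          return *state & EX_IRQD_MANAGED_SHUTDOWN;
      }

      static inline void ex_irqd_set_managed_shutdown(unsigned int *state)
      {
          *state |= EX_IRQD_MANAGED_SHUTDOWN;
      }

      static inline void ex_irqd_clr_managed_shutdown(unsigned int *state)
      {
          *state &= ~EX_IRQD_MANAGED_SHUTDOWN;
      }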
    • genirq/cpuhotplug: Use effective affinity mask · 415fcf1a
      Thomas Gleixner authored
      If the architecture supports the effective affinity mask, migrating
      interrupts away which are not targeted by the effective mask is
      pointless.
      
      They can stay in the user or system supplied affinity mask, but won't be
      targeted at any given point as the affinity setter functions need to
      validate against the online CPU mask anyway.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235446.328488490@linutronix.de
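      A minimal sketch of the migration-path check this implies (core code,
      heavily simplified; the real check in kernel/irq/cpuhotplug.c also looks
      at desc->action and other state):

      #include <linux/irq.h>
      #include <linux/cpumask.h>

      /* Sketch: only fix up an interrupt if the outgoing CPU is actually a
       * target according to the effective affinity mask. */
      static bool example_irq_needs_fixup(struct irq_data *d,
                                          unsigned int dying_cpu)
      {
          const struct cpumask *m = irq_data_get_effective_affinity_mask(d);

          return cpumask_test_cpu(dying_cpu, m);
      }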
    • genirq: Introduce effective affinity mask · 0d3f5425
      Thomas Gleixner authored
      There is currently no way to evaluate the effective affinity mask of a
      given interrupt. Many irq chips allow only a single target CPU or a subset
      of CPUs in the affinity mask.
      
      Updating the mask at the time of setting the affinity to the subset would
      be counterproductive because information for CPU hotplug about assigned
      interrupt affinities gets lost. On CPU hotplug it's also pointless to
      force-migrate an interrupt which is not effectively targeted at the CPU.
      But currently the information is not available.
      
      Provide a separate mask to be updated by the irq_chip->irq_set_affinity()
      implementations. Implement the read-only proc files so the user can see
      the effective mask as well, without trying to deduce it from /proc/interrupts.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235446.247834245@linutronix.de
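      A hedged sketch of how an irq_chip's irq_set_affinity() callback keeps
      that separate mask up to date, assuming the
      irq_data_update_effective_affinity() accessor; the chip and its
      single-target behaviour are hypothetical:

      #include <linux/irq.h>
      #include <linux/cpumask.h>
      #include <linux/errno.h>

      /* Sketch: a chip that can only target one CPU picks one online CPU
       * from the requested mask and records it as the effective affinity,
       * so /proc/irq/<n>/effective_affinity reflects the real target. */
      static int example_chip_set_affinity(struct irq_data *d,
                                           const struct cpumask *mask,
                                           bool force)
      {
          unsigned int cpu = cpumask_any_and(mask, cpu_online_mask);

          if (cpu >= nr_cpu_ids)
              return -EINVAL;

          /* ... program the hardware to route the interrupt to 'cpu' ... */

          irq_data_update_effective_affinity(d, cpumask_of(cpu));
          return IRQ_SET_MASK_OK_DONE;
      }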