1. 14 May 2018, 3 commits
  2. 13 May 2018, 5 commits
    • irqchip/gic-v3: Add support for Message Based Interrupts as an MSI controller · 50528752
      Committed by Marc Zyngier
      GICv3 offers the possibility to signal SPIs using a pair of doorbells
      (SETSPI, CLRSPI) under the name of Message Based Interrupts (MBI).
      They can be used as either traditional (edge) MSIs, or the more exotic
      level-triggered flavour.
      
      Let's implement support for platform MSI, which is the original intent
      for this feature.
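      
      To picture the doorbell scheme: a device signals a level SPI by writing
      the interrupt's INTID to the two registers. A minimal sketch, assuming
      the GICv3 architectural register offsets and hypothetical 'dist_base'
      and 'spi' parameters:
      
          #include <linux/io.h>
          #include <linux/types.h>
      
          #define GICD_SETSPI_NSR 0x0040  /* write INTID: line goes high */
          #define GICD_CLRSPI_NSR 0x0048  /* write INTID: line goes low */
      
          static void mbi_assert_spi(void __iomem *dist_base, u32 spi)
          {
                  writel_relaxed(spi, dist_base + GICD_SETSPI_NSR);
          }
      
          static void mbi_deassert_spi(void __iomem *dist_base, u32 spi)
          {
                  writel_relaxed(spi, dist_base + GICD_CLRSPI_NSR);
          }
      
      An edge MSI only ever writes the SETSPI doorbell; the level-triggered
      flavour uses both.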
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
      Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Link: https://lkml.kernel.org/r/20180508121438.11301-8-marc.zyngier@arm.com
      50528752
    • irqdomain: Let irq_find_host default to DOMAIN_BUS_WIRED · 64619343
      Committed by Marc Zyngier
      At the beginning of times, irq_find_host() was simple. Each device node
      implemented at most one irq domain, and we were happy. Over time, things
      have become more complex, and we now have nodes implementing a plurality
      of domains, tagged by "bus_token".
      
      Crucially, users of irq_find_host() all expect the most basic domain
      to be returned, and not any other domain such as a bus-specific MSI
      domain.
      
      So let's change irq_find_host() to first look for a DOMAIN_BUS_WIRED
      domain, and only if this fails fallback to DOMAIN_BUS_ANY. Note that
      this is consistent with what irq_create_fwspec_mapping is already
      doing, see 530cbe10 ("irqdomain: Allow domain lookup with
      DOMAIN_BUS_WIRED token").
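      
      The resulting lookup order, roughly (irq_find_matching_host() is the
      existing bus-token-aware lookup):
      
          /* Prefer the plain wired domain; fall back to any domain only
           * if no wired one is registered for this node. */
          static inline struct irq_domain *irq_find_host(struct device_node *node)
          {
                  struct irq_domain *d;
      
                  d = irq_find_matching_host(node, DOMAIN_BUS_WIRED);
                  if (!d)
                          d = irq_find_matching_host(node, DOMAIN_BUS_ANY);
                  return d;
          }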
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
      Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Link: https://lkml.kernel.org/r/20180508121438.11301-6-marc.zyngier@arm.com
      64619343
    • dma-iommu: Fix compilation when !CONFIG_IOMMU_DMA · 8a22a3e1
      Committed by Marc Zyngier
      Inclusion of include/linux/dma-iommu.h when CONFIG_IOMMU_DMA is not selected
      results in the following splat:
      
      In file included from drivers/irqchip/irq-gic-v3-mbi.c:20:0:
      ./include/linux/dma-iommu.h:95:69: error: unknown type name ‘dma_addr_t’
       static inline int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
                                                                           ^~~~~~~~~~
      ./include/linux/dma-iommu.h:108:74: warning: ‘struct list_head’ declared inside parameter list will not be visible outside of this definition or declaration
       static inline void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list)
                                                                                ^~~~~~~~~
      scripts/Makefile.build:312: recipe for target 'drivers/irqchip/irq-gic-v3-mbi.o' failed
      
      Fix it by including linux/types.h.
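      
      The fix amounts to one include at the top of the header, roughly:
      
          /* include/linux/dma-iommu.h */
          #include <linux/types.h>  /* dma_addr_t and struct list_head for
                                     * the !CONFIG_IOMMU_DMA stub prototypes */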
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
      Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Link: https://lkml.kernel.org/r/20180508121438.11301-5-marc.zyngier@arm.com
      8a22a3e1
    • genirq/msi: Limit level-triggered MSI to platform devices · 6988e0e0
      Committed by Marc Zyngier
      Nobody would be insane enough to try and use level triggered
      MSIs on PCI, but let's make sure it doesn't happen. Also,
      let's mandate that the irqchip backing the platform MSI domain
      provides the IRQCHIP_SUPPORTS_LEVEL_MSI flag.
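      
      A sketch of the check on the platform-MSI side, using the flag names
      from the description above ('info' being the msi_domain_info under
      registration):
      
          /* Strip level capability unless the backing irqchip opted in. */
          if (info->flags & MSI_FLAG_LEVEL_CAPABLE) {
                  struct irq_chip *chip = info->chip;
      
                  if (!(chip->flags & IRQCHIP_SUPPORTS_LEVEL_MSI))
                          info->flags &= ~MSI_FLAG_LEVEL_CAPABLE;
          }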
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
      Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Link: https://lkml.kernel.org/r/20180508121438.11301-3-marc.zyngier@arm.com
      6988e0e0
    • genirq/msi: Allow level-triggered MSIs to be exposed by MSI providers · 0be8153c
      Committed by Marc Zyngier
      So far, MSIs have been used to signal edge-triggered interrupts, as
      a write is a good model for an edge (you can't "unwrite" something).
      On the other hand, routing zillions of wires in an SoC because you
      need level interrupts is a bit extreme.
      
      People have come up with a variety of schemes to support this, which
      involve sending two messages: one to signal the interrupt, and one
      to clear it. Since the kernel cannot represent this, we've ended up
      with side-band mechanisms that are pretty awful.
      
      Instead, let's acknowledge the requirement, and ensure that, under the
      right circumstances, the irq_compose_msi_msg and irq_write_msi_msg
      callbacks can take as a parameter an array of two messages instead of
      a pointer to a single one. We also add some checking that the compose
      method only clobbers the second message if the MSI domain has been
      created with the MSI_FLAG_LEVEL_CAPABLE flag.
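      
      A hypothetical compose callback for doorbell-style hardware then fills
      both slots ('setspi_addr'/'clrspi_addr' are illustrative, not from the
      patch):
      
          static phys_addr_t setspi_addr, clrspi_addr;    /* doorbell pair */
      
          /* msg[0] raises the line, msg[1] lowers it; only domains created
           * with MSI_FLAG_LEVEL_CAPABLE may write msg[1]. */
          static void level_msi_compose_msg(struct irq_data *d, struct msi_msg *msg)
          {
                  msg[0].address_lo = lower_32_bits(setspi_addr);
                  msg[0].address_hi = upper_32_bits(setspi_addr);
                  msg[0].data       = d->hwirq;
      
                  msg[1].address_lo = lower_32_bits(clrspi_addr);
                  msg[1].address_hi = upper_32_bits(clrspi_addr);
                  msg[1].data       = d->hwirq;
          }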
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
      Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Link: https://lkml.kernel.org/r/20180508121438.11301-2-marc.zyngier@arm.com
      0be8153c
  3. 12 May 2018, 2 commits
  4. 10 May 2018, 1 commit
  5. 05 May 2018, 1 commit
  6. 04 May 2018, 1 commit
    • sched/core: Introduce set_special_state() · b5bf9a90
      Committed by Peter Zijlstra
      Gaurav reported a perceived problem with TASK_PARKED, which turned out
      to be a broken wait-loop pattern in __kthread_parkme(), but the
      reported issue can (and does) in fact happen for states that do not do
      condition based sleeps.
      
      When the 'current->state = TASK_RUNNING' store of a previous
      (concurrent) try_to_wake_up() collides with the setting of a 'special'
      sleep state, we can lose the sleep state.
      
      Normal condition based wait-loops are immune to this problem, but
      sleep states that are not condition based are subject to it.
      
      There already is a fix for TASK_DEAD. Abstract that and also apply it
      to TASK_STOPPED and TASK_TRACED, both of which also lack a
      condition based wait-loop.
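      
      The abstracted helper serializes the state store against a concurrent
      try_to_wake_up() by taking ->pi_lock around it; roughly:
      
          #define set_special_state(state_value)                                \
                  do {                                                          \
                          unsigned long flags; /* may shadow */                 \
                          raw_spin_lock_irqsave(&current->pi_lock, flags);      \
                          current->state = (state_value);                       \
                          raw_spin_unlock_irqrestore(&current->pi_lock, flags); \
                  } while (0)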
      Reported-by: Gaurav Kohli <gkohli@codeaurora.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b5bf9a90
  7. 03 May 2018, 2 commits
    • bdi: wake up concurrent wb_shutdown() callers. · 8236b0ae
      Committed by Tetsuo Handa
      syzbot is reporting hung tasks at wait_on_bit(WB_shutting_down) in
      wb_shutdown() [1]. This seems to be because commit 5318ce7d ("bdi:
      Shutdown writeback on all cgwbs in cgwb_bdi_destroy()") forgot to call
      wake_up_bit(WB_shutting_down) after clear_bit(WB_shutting_down).
      
      Introduce a helper function clear_and_wake_up_bit() and use it, in order
      to avoid similar errors in future.
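      
      The helper pairs the clear with the barrier and wakeup that waiters of
      wait_on_bit() need; roughly:
      
          static inline void clear_and_wake_up_bit(int bit, void *word)
          {
                  clear_bit_unlock(bit, word);
                  /* wake_up_bit() needs a barrier between the clear and the
                   * waitqueue check; see the comments above wake_up_bit(). */
                  smp_mb__after_atomic();
                  wake_up_bit(word, bit);
          }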
      
      [1] https://syzkaller.appspot.com/bug?id=b297474817af98d5796bc544e1bb806fc3da0e5e
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Reported-by: syzbot <syzbot+c0cf869505e03bdf1a24@syzkaller.appspotmail.com>
      Fixes: 5318ce7d ("bdi: Shutdown writeback on all cgwbs in cgwb_bdi_destroy()")
      Cc: Tejun Heo <tj@kernel.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      8236b0ae
    • kthread, sched/wait: Fix kthread_parkme() completion issue · 85f1abe0
      Committed by Peter Zijlstra
      Even with the wait-loop fixed, there is a further issue with
      kthread_parkme(). Upon hotplug, when we do takedown_cpu(),
      smpboot_park_threads() can return before all those threads are in fact
      blocked, due to the placement of the complete() in __kthread_parkme().
      
      When that happens, sched_cpu_dying() -> migrate_tasks() can end up
      migrating such a still runnable task onto another CPU.
      
      Normally the task will have hit schedule() and gone to sleep by the
      time we do kthread_unpark(), which will then do __kthread_bind() to
      re-bind the task to the correct CPU.
      
      However, when we lose the initial TASK_PARKED store to the concurrent
      wakeup issue described previously, then do the complete() and get
      migrated, it is possible to either:
      
       - observe kthread_unpark()'s clearing of SHOULD_PARK and terminate
         the park and set TASK_RUNNING, or
      
       - have __kthread_bind()'s wait_task_inactive() observe the competing
         TASK_RUNNING store.
      
      Either way the WARN() in __kthread_bind() will trigger and fail to
      correctly set the CPU affinity.
      
      Fix this by only issuing the complete() when the kthread has scheduled
      out. This does away with all the icky 'still running' nonsense.
      
      The alternative is to promote TASK_PARKED to a special state; this
      guarantees wait_task_inactive() cannot observe a 'stale' TASK_RUNNING
      and we'll end up doing the right thing, but this preserves the whole
      icky business of potentially migrating the still runnable thing.
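      
      A simplified sketch of the resulting park loop (not the exact diff;
      the point is that the completion no longer fires before the final
      schedule(), but only once the thread is truly off the CPU):
      
          static void __kthread_parkme(struct kthread *self)
          {
                  for (;;) {
                          set_current_state(TASK_PARKED);
                          if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags))
                                  break;
                          /* the "parked" completion is signalled from the
                           * scheduler tail, after this task scheduled out */
                          schedule();
                  }
                  __set_current_state(TASK_RUNNING);
          }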
      Reported-by: Gaurav Kohli <gkohli@codeaurora.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      85f1abe0
  8. 29 April 2018, 1 commit
    • <linux/stringhash.h>: fix end_name_hash() for 64bit long · 19b9ad67
      Committed by Amir Goldstein
      The comment claims that this helper will try not to lose bits, but for
      64bit long it loses the high bits before hashing 64bit long into 32bit
      int.  Use the helper hash_long() to do the right thing for 64bit long.
      For 32bit long, there is no change.
      
      All the callers of end_name_hash() either assign the result to
      qstr->hash, which is u32 or return the result as an int value (e.g.
      full_name_hash()).  Change the helper return type to int to conform to
      its users.
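      
      The fixed helper routes the full unsigned long through hash_long()
      instead of truncating it; roughly:
      
          #include <linux/hash.h>
      
          /* Fold a 'long' hash down to 32 bits without first discarding
           * the high half on 64bit; on 32bit the effect is unchanged. */
          static inline unsigned int end_name_hash(unsigned long hash)
          {
                  return hash_long(hash, 32);
          }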
      
      [ It took me a while to apply this, because my initial reaction to it
        was - incorrectly - that it could make for slower code.
      
        After having looked more at it, I take back all my complaints about
        the patch, Amir was right and I was mis-reading things or just being
        stupid.
      
        I also don't worry too much about the possible performance impact of
        this on 64-bit, since most architectures that actually care about
        performance end up not using this very much (the dcache code is the
        most performance-critical, but the word-at-a-time case uses its own
        hashing anyway).
      
        So this ends up being mostly used for filesystems that do their own
        degraded hashing (usually because they want a case-insensitive
        comparison function).
      
        A _tiny_ worry remains, in that not everybody uses DCACHE_WORD_ACCESS,
        and then this potentially makes things more expensive on 64-bit
        architectures with slow or lacking multipliers even for the normal
        case.
      
        That said, realistically the only such architecture I can think of is
        PA-RISC. Nobody really cares about performance on that, it's more of a
        "look ma, I've got warts^W an odd machine" platform.
      
        So the patch is fine, and all my initial worries were just misplaced
        from not looking at this properly.   - Linus ]
      Signed-off-by: Amir Goldstein <amir73il@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      19b9ad67
  9. 27 April 2018, 3 commits
  10. 26 April 2018, 4 commits
    • blk-mq: fix sysfs inflight counter · bf0ddaba
      Committed by Omar Sandoval
      When the blk-mq inflight implementation was added, /proc/diskstats was
      converted to use it, but /sys/block/$dev/inflight was not. Fix it by
      adding another helper to count in-flight requests by data direction.
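      
      The shape of the new helper is a per-direction variant of the existing
      in-flight tag-iterator callback; a sketch ('struct mq_inflight' as in
      the original in-flight code):
      
          struct mq_inflight {
                  struct hd_struct *part;
                  unsigned int *inflight;
          };
      
          /* Bucket in-flight requests by direction so the sysfs file can
           * report reads and writes separately: 0 = read, 1 = write. */
          static void blk_mq_check_inflight_rw(struct blk_mq_hw_ctx *hctx,
                                               struct request *rq, void *priv,
                                               bool reserved)
          {
                  struct mq_inflight *mi = priv;
      
                  if (rq->part == mi->part)
                          mi->inflight[rq_data_dir(rq)]++;
          }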
      
      Fixes: f299b7c7 ("blk-mq: provide internal in-flight variant")
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      bf0ddaba
    • Revert: Unify CLOCK_MONOTONIC and CLOCK_BOOTTIME · a3ed0e43
      Committed by Thomas Gleixner
      Revert commits
      
      92af4dcb ("tracing: Unify the "boot" and "mono" tracing clocks")
      127bfa5f ("hrtimer: Unify MONOTONIC and BOOTTIME clock behavior")
      7250a404 ("posix-timers: Unify MONOTONIC and BOOTTIME clock behavior")
      d6c7270e ("timekeeping: Remove boot time specific code")
      f2d6fdbf ("Input: Evdev - unify MONOTONIC and BOOTTIME clock behavior")
      d6ed449a ("timekeeping: Make the MONOTONIC clock behave like the BOOTTIME clock")
      72199320 ("timekeeping: Add the new CLOCK_MONOTONIC_ACTIVE clock")
      
      As stated in the pull request for the unification of CLOCK_MONOTONIC and
      CLOCK_BOOTTIME, it was clear that we might have to revert the change.
      
      As reported by several folks, systemd and other applications rely on the
      documented behaviour of CLOCK_MONOTONIC on Linux and break with the above
      changes. After resume daemons time out and other timeout related issues are
      observed. Rafael compiled this list:
      
      * systemd kills daemons on resume, after >WatchdogSec seconds
        of suspending (Genki Sky).  [Verified that that's because systemd uses
        CLOCK_MONOTONIC and expects it to not include the suspend time.]
      
      * systemd-journald misbehaves after resume:
        systemd-journald[7266]: File /var/log/journal/016627c3c4784cd4812d4b7e96a34226/system.journal
      corrupted or uncleanly shut down, renaming and replacing.
        (Mike Galbraith).
      
      * NetworkManager reports "networking disabled" and networking is broken
        after resume 50% of the time (Pavel).  [May be because of systemd.]
      
      * MATE desktop dims the display and starts the screensaver right after
        system resume (Pavel).
      
      * Full system hang during resume (me).  [May be due to systemd or NM or both.]
      
      That happens on Debian and openSUSE systems.
      
      It's sad that these problems were caught neither in -next nor by those
      folks who expressed interest in this change.
      Reported-by: Rafael J. Wysocki <rjw@rjwysocki.net>
      Reported-by: Genki Sky <sky@genki.is>
      Reported-by: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Kevin Easton <kevin@guarana.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mark Salyzyn <salyzyn@android.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      a3ed0e43
    • remoteproc: fix crashed parameter logic on stop call · fcd58037
      Committed by Arnaud Pouliquen
      Fix the rproc_add_subdev parameter name and invert the crashed logic.
      
      Fixes: 880f5b38 ("remoteproc: Pass type of shutdown to subdev remove")
      Reviewed-by: Alex Elder <elder@linaro.org>
      Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@st.com>
      Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
      fcd58037
    • virtio: add ability to iterate over vqs · 24a7e4d2
      Committed by Michael S. Tsirkin
      For cleanup it's helpful to be able to simply scan all vqs and discard
      all data. Add an iterator to do that.
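      
      The iterator is a thin wrapper over the device's vq list; roughly:
      
          #define virtio_device_for_each_vq(vdev, vq) \
                  list_for_each_entry(vq, &(vdev)->vqs, list)
      
      A driver's remove path can then walk every queue, e.g. to discard any
      buffers still sitting in them, without tracking the vqs itself.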
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      24a7e4d2
  11. 25 April 2018, 2 commits
  12. 24 April 2018, 3 commits
  13. 23 April 2018, 3 commits
  14. 21 April 2018, 3 commits
    • kasan: add no_sanitize attribute for clang builds · 12c8f25a
      Committed by Andrey Konovalov
      KASAN uses the __no_sanitize_address macro to disable instrumentation of
      particular functions.  Right now it's defined only for GCC build, which
      causes false positives when clang is used.
      
      This patch adds a definition for clang.
      
      Note that clang revision 329612 or higher is required.
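      
      The clang-side definition, roughly (clang spells the attribute with a
      string argument):
      
          /* include/linux/compiler-clang.h */
          #define __no_sanitize_address \
                  __attribute__((no_sanitize("address")))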
      
      [andreyknvl@google.com: remove redundant #ifdef CONFIG_KASAN check]
        Link: http://lkml.kernel.org/r/c79aa31a2a2790f6131ed607c58b0dd45dd62a6c.1523967959.git.andreyknvl@google.com
      Link: http://lkml.kernel.org/r/4ad725cc903f8534f8c8a60f0daade5e3d674f8d.1523554166.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Paul Lawrence <paullawrence@google.com>
      Cc: Sandipan Das <sandipan@linux.vnet.ibm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      12c8f25a
    • writeback: safer lock nesting · 2e898e4c
      Committed by Greg Thelen
      lock_page_memcg()/unlock_page_memcg() use spin_lock_irqsave/restore() if
      the page's memcg is undergoing move accounting, which occurs when a
      process leaves its memcg for a new one that has
      memory.move_charge_at_immigrate set.
      
      unlocked_inode_to_wb_begin,end() use spin_lock_irq/spin_unlock_irq() if
      the given inode is switching writeback domains.  Switches occur when
      enough writes are issued from a new domain.
      
      This existing pattern is thus suspicious:
          lock_page_memcg(page);
          unlocked_inode_to_wb_begin(inode, &locked);
          ...
          unlocked_inode_to_wb_end(inode, locked);
          unlock_page_memcg(page);
      
      If an inode switch and a process memcg migration are both in-flight then
      unlocked_inode_to_wb_end() will unconditionally enable interrupts while
      still holding the lock_page_memcg() irq spinlock.  This suggests the
      possibility of deadlock if an interrupt occurs before unlock_page_memcg().
      
          truncate
          __cancel_dirty_page
          lock_page_memcg
          unlocked_inode_to_wb_begin
          unlocked_inode_to_wb_end
          <interrupts mistakenly enabled>
                                          <interrupt>
                                          end_page_writeback
                                          test_clear_page_writeback
                                          lock_page_memcg
                                          <deadlock>
          unlock_page_memcg
      
      Due to configuration limitations this deadlock is not currently possible
      because we don't mix cgroup writeback (a cgroupv2 feature) and
      memory.move_charge_at_immigrate (a cgroupv1 feature).
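      
      The shape of the fix: replace the bare 'bool locked' with a cookie that
      also records the caller's saved irq flags, so the end of the
      transaction restores exactly the state the begin saved; a sketch:
      
          struct wb_lock_cookie {
                  bool locked;
                  unsigned long flags;    /* irq state saved at _begin */
          };
      
      unlocked_inode_to_wb_begin/end() then take a struct wb_lock_cookie *
      instead of a bool *, and use spin_lock_irqsave()/irqrestore() so that
      interrupts are never unconditionally re-enabled.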
      
      If the kernel is hacked to always claim inode switching and memcg
      moving_account, then this script triggers lockup in less than a minute:
      
        cd /mnt/cgroup/memory
        mkdir a b
        echo 1 > a/memory.move_charge_at_immigrate
        echo 1 > b/memory.move_charge_at_immigrate
        (
          echo $BASHPID > a/cgroup.procs
          while true; do
            dd if=/dev/zero of=/mnt/big bs=1M count=256
          done
        ) &
        while true; do
          sync
        done &
        sleep 1h &
        SLEEP=$!
        while true; do
          echo $SLEEP > a/cgroup.procs
          echo $SLEEP > b/cgroup.procs
        done
      
      The deadlock does not seem possible, so it's debatable if there's any
      reason to modify the kernel.  I suggest we should, to prevent future
      surprises.  And Wang Long said "this deadlock occurs three times in our
      environment", so there's more reason to apply this, even to stable.
      Stable 4.4 has minor conflicts applying this patch.  For a clean 4.4 patch
      see "[PATCH for-4.4] writeback: safer lock nesting"
      https://lkml.org/lkml/2018/4/11/146
      
      [gthelen@google.com: v4]
        Link: http://lkml.kernel.org/r/20180411084653.254724-1-gthelen@google.com
      [akpm@linux-foundation.org: comment tweaks, struct initialization simplification]
      Change-Id: Ibb773e8045852978f6207074491d262f1b3fb613
      Link: http://lkml.kernel.org/r/20180410005908.167976-1-gthelen@google.com
      Fixes: 682aa8e1 ("writeback: implement unlocked_inode_to_wb transaction and use it for stat updates")
      Signed-off-by: Greg Thelen <gthelen@google.com>
      Reported-by: Wang Long <wanglong19@meituan.com>
      Acked-by: Wang Long <wanglong19@meituan.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: <stable@vger.kernel.org>	[v4.2+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2e898e4c
    • fork: unconditionally clear stack on fork · e01e8063
      Committed by Kees Cook
      One of the classes of kernel stack content leaks[1] is exposing the
      contents of prior heap or stack contents when a new process stack is
      allocated.  Normally, those stacks are not zeroed, and the old contents
      remain in place.  In the face of stack content exposure flaws, those
      contents can leak to userspace.
      
      Fixing this will make the kernel no longer vulnerable to these flaws, as
      the stack will be wiped each time a stack is assigned to a new process.
      There's not a meaningful change in runtime performance; it almost looks
      like it provides a benefit.
      
      Performing back-to-back kernel builds before:
      	Run times: 157.86 157.09 158.90 160.94 160.80
      	Mean: 159.12
      	Std Dev: 1.54
      
      and after:
      	Run times: 159.31 157.34 156.71 158.15 160.81
      	Mean: 158.46
      	Std Dev: 1.46
      
      Instead of making this a build or runtime config, Andy Lutomirski
      recommended this just be enabled by default.
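      
      A sketch of where the clearing lands, assuming the CONFIG_VMAP_STACK
      cached-stack reuse path (the non-vmapped path gets the same effect by
      adding __GFP_ZERO to THREADINFO_GFP):
      
          for (i = 0; i < NR_CACHED_STACKS; i++) {
                  struct vm_struct *s = this_cpu_xchg(cached_stacks[i], NULL);
      
                  if (!s)
                          continue;
      
                  /* Clear stale pointers from the reused stack. */
                  memset(s->addr, 0, THREAD_SIZE);
      
                  tsk->stack_vm_area = s;
                  return s->addr;
          }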
      
      [1] A noisy search for many kinds of stack content leaks can be seen here:
      https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=linux+kernel+stack+leak
      
      I did some more with perf and cycle counts on running 100,000 execs of
      /bin/true.
      
      before:
      Cycles: 218858861551 218853036130 214727610969 227656844122 224980542841
      Mean:  221015379122.60
      Std Dev: 4662486552.47
      
      after:
      Cycles: 213868945060 213119275204 211820169456 224426673259 225489986348
      Mean:  217745009865.40
      Std Dev: 5935559279.99
      
      It continues to look like it's faster, though the deviation is rather
      wide, but I'm not sure what I could do that would be less noisy.  I'm
      open to ideas!
      
      Link: http://lkml.kernel.org/r/20180221021659.GA37073@beast
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Rasmus Villemoes <rasmus.villemoes@prevas.dk>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e01e8063
  15. 20 April 2018, 1 commit
    • fsnotify: Fix fsnotify_mark_connector race · d90a10e2
      Committed by Robert Kolchmeyer
      fsnotify() acquires a reference to a fsnotify_mark_connector through
      the SRCU-protected pointer to_tell->i_fsnotify_marks. However, it
      appears that no precautions are taken in fsnotify_put_mark() to
      ensure that fsnotify() drops its reference to this
      fsnotify_mark_connector before assigning a value to its 'destroy_next'
      field. This can result in fsnotify_put_mark() assigning a value
      to a connector's 'destroy_next' field right before fsnotify() tries to
      traverse the linked list referenced by the connector's 'list' field.
      Since these two fields are members of the same union, this behavior
      results in a kernel panic.
      
      This issue is resolved by moving the connector's 'destroy_next' field
      into the object pointer union. This should work since the object pointer
      access is protected by both a spinlock and the value of the 'flags'
      field, and the 'flags' field is cleared while holding the spinlock in
      fsnotify_put_mark() before 'destroy_next' is updated. It shouldn't be
      possible for another thread to accidentally read from the object pointer
      after the 'destroy_next' field is updated.
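      
      A sketch of the resulting layout (fields per the description above):
      
          struct fsnotify_mark_connector {
                  spinlock_t lock;
                  unsigned int flags;     /* type of object [lock] */
                  union { /* object pointer [lock] */
                          struct inode *inode;
                          struct vfsmount *mnt;
                          /* chains connectors freed after the SRCU grace
                           * period; never live while still attached */
                          struct fsnotify_mark_connector *destroy_next;
                  };
                  struct hlist_head list; /* no longer shares storage with
                                           * destroy_next */
          };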
      
      The offending behavior here is extremely unlikely; since
      fsnotify_put_mark() removes references to a connector (specifically,
      it ensures that the connector is unreachable from the inode it was
      formerly attached to) before updating its 'destroy_next' field, a
      sizeable chunk of code in fsnotify_put_mark() has to execute in the
      short window between when fsnotify() acquires the connector reference
      and when it saves the value of its 'list' field. On the HEAD kernel, I've only
      been able to reproduce this by inserting a udelay(1) in fsnotify().
      However, I've been able to reproduce this issue without inserting a
      udelay(1) anywhere on older unmodified release kernels, so I believe
      it's worth fixing at HEAD.
      
      References: https://bugzilla.kernel.org/show_bug.cgi?id=199437
      Fixes: 08991e83
      CC: stable@vger.kernel.org
      Signed-off-by: Robert Kolchmeyer <rkolchmeyer@google.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      d90a10e2
  16. 19 April 2018, 4 commits
  17. 18 April 2018, 1 commit