1. 18 November 2017 (10 commits)
    • xprtrdma: Add data structure to manage RDMA Send arguments · ae72950a
      Authored by Chuck Lever
      Problem statement:
      
      Recently Sagi Grimberg <sagi@grimberg.me> observed that kernel RDMA-
      enabled storage initiators don't handle delayed Send completion
      correctly. If Send completion is delayed beyond the end of a ULP
      transaction, the ULP may release resources that are still being used
      by the HCA to complete a long-running Send operation.
      
      This is a common design trait amongst our initiators. Most Send
      operations are faster than the ULP transaction they are part of.
      Waiting for a completion for these is typically unnecessary.
      
      Infrequently, a network partition or some other problem crops up
      where an ordering problem can occur. In NFS parlance, the RPC Reply
      arrives and completes the RPC, but the HCA is still retrying the
      Send WR that conveyed the RPC Call. In this case, the HCA can try
      to use memory that has been invalidated or DMA unmapped, and the
      connection is lost. Worse, if that memory has been re-used for
      something else (possibly not related to NFS), the Send retransmission
      exposes that data on the wire.
      
      Thus we cannot assume that it is safe to release Send-related
      resources just because a ULP reply has arrived.
      
      After some analysis, we have determined that the completion
      housekeeping will not be difficult for xprtrdma:
      
       - Inline Send buffers are registered via the local DMA key, and
         are already left DMA mapped for the lifetime of a transport
         connection, thus no additional handling is necessary for those
       - Gathered Sends involving page cache pages _will_ need to
         DMA unmap those pages after the Send completes. But like
         inline send buffers, they are registered via the local DMA key,
         and thus will not need to be invalidated
      
      In addition, RPC completion will need to wait for Send completion
      in the latter case. However, nearly always, the Send that conveys
      the RPC Call will have completed long before the RPC Reply
      arrives, and thus no additional latency will be accrued.
      
      Design notes:
      
      In this patch, the rpcrdma_sendctx object is introduced, and a
      lock-free circular queue is added to manage a set of them per
      transport.
      
      The RPC client's send path already prevents sending more than one
      RPC Call at the same time. This allows us to treat the consumer
      side of the queue (rpcrdma_sendctx_get_locked) as if there is a
      single consumer thread.
      
      The producer side of the queue (rpcrdma_sendctx_put_locked) is
      invoked only from the Send completion handler, which is a single
      thread of execution (soft IRQ).
      
      The only care that needs to be taken is with the tail index, which
      is shared between the producer and consumer. Only the producer
      updates the tail index. The consumer compares the head with the
      tail to ensure that a sendctx that is in use is never handed
      out again (or, expressed more conventionally, the queue is empty).
      
      When the sendctx queue empties completely, there are enough Sends
      outstanding that posting more Send operations can result in a Send
      Queue overflow. In this case, the ULP is told to wait and try again.
      This introduces strong Send Queue accounting to xprtrdma.
      
      As a final touch, Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      suggested a mechanism that does not require signaling every Send.
      We signal once every N Sends, and perform SGE unmapping of N Send
      operations during that one completion.
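
      The single-producer/single-consumer ring described above can be
      modeled in a few lines of portable C. Below is a minimal userspace
      sketch using C11 atomics; it illustrates the idea and is not the
      kernel code — the names (sc_ring, sc_get, sc_put) and the ring
      depth are hypothetical.

        #include <stdatomic.h>

        #define SC_RING_SIZE 64u            /* power of two; hypothetical */

        struct sendctx {
            unsigned int unmap_count;       /* SGEs to DMA unmap at completion */
        };

        struct sc_ring {
            struct sendctx ctx[SC_RING_SIZE];
            unsigned int head;              /* consumer-only: contexts handed out */
            _Atomic unsigned int tail;      /* producer-only: contexts retired */
        };

        /* Consumer (the RPC send path, effectively single-threaded).
         * NULL means every context is still owned by an in-flight Send,
         * so another post could overflow the Send Queue: the caller
         * must wait and try again. */
        static struct sendctx *sc_get(struct sc_ring *r)
        {
            unsigned int tail = atomic_load_explicit(&r->tail,
                                                     memory_order_acquire);
            if (r->head - tail == SC_RING_SIZE)
                return NULL;
            return &r->ctx[r->head++ & (SC_RING_SIZE - 1)];
        }

        /* Producer (the Send completion handler, one soft-IRQ thread).
         * Signaling only every Nth Send means one completion retires a
         * whole batch, up to and including the signaled context. */
        static void sc_put(struct sc_ring *r, unsigned int retired_upto)
        {
            atomic_store_explicit(&r->tail, retired_upto,
                                  memory_order_release);
        }

      Only the consumer writes head and only the producer writes tail,
      which is why the ring needs no lock: each side sees a possibly
      stale but always safe view of the other's index.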
      Reported-by: Sagi Grimberg <sagi@grimberg.me>
      Suggested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      ae72950a
    • xprtrdma: "Unoptimize" rpcrdma_prepare_hdr_sge() · a062a2a3
      Authored by Chuck Lever
      Commit 655fec69 ("xprtrdma: Use gathered Send for large inline
      messages") assumed that, since the zeroeth element of the Send SGE
      array always pointed to req->rl_rdmabuf, it needed to be initialized
      just once. This was a valid assumption because the Send SGE array
      and rl_rdmabuf both live in the same rpcrdma_req.
      
      In a subsequent patch, the Send SGE array will be separated from the
      rpcrdma_req, so the zeroeth element of the SGE array needs to be
      initialized every time.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      a062a2a3
    • xprtrdma: Change return value of rpcrdma_prepare_send_sges() · 857f9aca
      Authored by Chuck Lever
      Clean up: Make rpcrdma_prepare_send_sges() return a negative errno
      instead of a bool. Soon callers will want distinct treatments of
      different types of failures.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      857f9aca
    • xprtrdma: Fix error handling in rpcrdma_prepare_msg_sges() · 394b2c77
      Authored by Chuck Lever
      When this function fails, it needs to undo the DMA mappings it's
      done so far. Otherwise these are leaked.
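
      The shape of such a fix is the classic map-then-unwind pattern. A
      generic, self-contained sketch follows (stub types and names, not
      the actual rpcrdma code):

        struct device;
        struct sge { void *addr; unsigned int length; };

        /* Stand-ins for the real DMA mapping API. */
        static int dma_map_one(struct device *d, struct sge *s)
        { (void)d; (void)s; return 0; }
        static void dma_unmap_one(struct device *d, struct sge *s)
        { (void)d; (void)s; }

        static int prepare_sges(struct device *d, struct sge *sge, int count)
        {
            int i, err = 0;

            for (i = 0; i < count; i++) {
                err = dma_map_one(d, &sge[i]);
                if (err)
                    goto out_unmap;
            }
            return 0;

        out_unmap:
            /* Undo every mapping made before the failure, newest first,
             * so nothing leaks when the caller sees the error. */
            while (--i >= 0)
                dma_unmap_one(d, &sge[i]);
            return err;
        }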
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      394b2c77
    • xprtrdma: Clean up SGE accounting in rpcrdma_prepare_msg_sges() · ad99f053
      Authored by Chuck Lever
      Clean up. rpcrdma_prepare_hdr_sge() sets num_sge to one, then
      rpcrdma_prepare_msg_sges() sets num_sge again to the count of SGEs
      it added, plus one for the header SGE just mapped in
      rpcrdma_prepare_hdr_sge(). This is confusing, and bakes in an
      assumption about when these functions are called.
      
      Instead, maintain a running count that both functions can update
      with just the number of SGEs they have added to the SGE array.
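
      A sketch of that convention (hypothetical names): the caller zeroes
      the count once, and each helper adds exactly what it mapped, so
      neither helper needs to know whether or when the other ran.

        struct send_wr { unsigned int num_sge; };

        static void prepare_hdr_sge(struct send_wr *wr)
        {
            /* ... map the single transport-header SGE ... */
            wr->num_sge += 1;
        }

        static void prepare_msg_sges(struct send_wr *wr, unsigned int mapped)
        {
            /* ... map `mapped` payload SGEs ... */
            wr->num_sge += mapped;
        }

        static void prepare_send(struct send_wr *wr, unsigned int payload_sges)
        {
            wr->num_sge = 0;        /* zeroed once, up front */
            prepare_hdr_sge(wr);
            prepare_msg_sges(wr, payload_sges);
        }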
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      ad99f053
    • xprtrdma: Decode credits field in rpcrdma_reply_handler · be798f90
      Authored by Chuck Lever
      We need to decode and save the incoming rdma_credits field _after_
      we know that the direction of the message is "forward direction
      Reply". Otherwise, the credits value in reverse direction Calls is
      also used to update the forward direction credits.
      
      It is safe to decode the rdma_credits field in rpcrdma_reply_handler
      now that rpcrdma_reply_handler is single-threaded. Receives complete
      in the same order as they were sent on the NFS server.
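
      The resulting order of operations, as a sketch (the names are
      hypothetical, and the clamp of a zero credit value to one is an
      assumption about how RPC-over-RDMA credit grants are usually
      floored, not something this patch states):

        enum rdma_dir { FWD_REPLY, REV_CALL };

        static void reply_handler(enum rdma_dir dir, unsigned int rdma_credits,
                                  unsigned int *fwd_credits)
        {
            /* Classify the message first ... */
            if (dir != FWD_REPLY)
                return;             /* reverse direction Call: its credits
                                     * must not update the forward ones */

            /* ... and only then let the credits field take effect. */
            if (rdma_credits == 0)
                rdma_credits = 1;   /* assumed floor against deadlock */
            *fwd_credits = rdma_credits;
        }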
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      be798f90
    • xprtrdma: Invoke rpcrdma_reply_handler directly from RECV completion · d8f532d2
      Authored by Chuck Lever
      I noticed that the soft IRQ thread looked pretty busy under heavy
      I/O workloads. perf suggested one area that was expensive was the
      queue_work() call in rpcrdma_wc_receive. That gave me some ideas.
      
      Instead of scheduling a separate worker to process RPC Replies,
      promote the Receive completion handler to IB_POLL_WORKQUEUE, and
      invoke rpcrdma_reply_handler directly.
      
      Note that the poll workqueue is single-threaded. In order to keep
      memory invalidation from serializing all RPC Replies, handle any
      necessary invalidation tasks in a separate multi-threaded workqueue.
      
      This provides a two-tier scheme, similar to OS I/O interrupt
      handlers: A fast interrupt handler that schedules the slow handler
      and re-enables the interrupt, and a slower handler that is invoked
      for any needed heavy lifting.
      
      Benefits include:
      - One less context switch for RPCs that don't register memory
      - Receive completion handling is moved out of soft IRQ context to
        make room for other users of soft IRQ
      - The same CPU core now DMA syncs and XDR decodes the Receive buffer
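
      The two-tier dispatch above reduces to one decision in the Receive
      completion path. A stub-level sketch (hypothetical names, not the
      kernel workqueue API):

        struct rpcrdma_reply;           /* opaque decoded Receive */

        static int reply_needs_invalidation(struct rpcrdma_reply *rep)
        { (void)rep; return 0; }
        static void complete_rpc(struct rpcrdma_reply *rep)
        { (void)rep; }
        static void queue_to_invalidation_pool(struct rpcrdma_reply *rep)
        { (void)rep; }

        /* Tier 1: runs in the single-threaded Receive completion
         * context. Replies that need no invalidation complete right
         * here with no extra context switch; the rest go to a
         * multi-threaded pool (tier 2) so they do not serialize the
         * replies that could have completed immediately. */
        static void wc_receive(struct rpcrdma_reply *rep)
        {
            if (reply_needs_invalidation(rep))
                queue_to_invalidation_pool(rep);
            else
                complete_rpc(rep);
        }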
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      d8f532d2
    • xprtrdma: Refactor rpcrdma_reply_handler some more · e1352c96
      Authored by Chuck Lever
      Clean up: I'd like to be able to invoke the tail of
      rpcrdma_reply_handler in two different places. Split the tail out
      into its own helper function.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      e1352c96
    • xprtrdma: Move decoded header fields into rpcrdma_rep · 5381e0ec
      Authored by Chuck Lever
      Clean up: Make it easier to pass the decoded XID, vers, credits, and
      proc fields around by moving these variables into struct rpcrdma_rep.
      
      Note: the credits field will be handled in a subsequent patch.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      5381e0ec
    • xprtrdma: Throw away reply when version is unrecognized · 61433af5
      Authored by Chuck Lever
      A reply with an unrecognized value in the version field means the
      transport header is potentially garbled and therefore all the fields
      are untrustworthy.
      
      Fixes: 59aa1f9a ("xprtrdma: Properly handle RDMA_ERROR ... ")
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      61433af5
  2. 17 October 2017 (7 commits)
    • xprtrdma: Remove ro_unmap_safe · 2b4f8923
      Authored by Chuck Lever
      Clean up: There are no remaining callers of this method.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      2b4f8923
    • xprtrdma: Use ro_unmap_sync in xprt_rdma_send_request · 4ce6c04c
      Authored by Chuck Lever
      The "safe" version of ro_unmap is used here to avoid waiting
      unnecessarily. However:
      
       - It is safe to wait. After all, we have to wait anyway when using
         FMR to register memory.
      
       - This case is rare: it occurs only after a reconnect.
      
      By switching this call site to ro_unmap_sync, the final use of
      ro_unmap_safe is removed.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      4ce6c04c
    • xprtrdma: Don't defer fencing an async RPC's chunks · 8f66b1a5
      Authored by Chuck Lever
      In current kernels, waiting in xprt_release appears to be safe to
      do. I had erroneously believed that for ASYNC RPCs, waiting of any
      kind in xprt_release->xprt_rdma_free would result in deadlock. I've
      done injection testing and consulted with Trond to confirm that
      waiting in the RPC release path is safe.
      
      For the very few times where RPC resources haven't yet been released
      earlier by the reply handler, it is safe to wait synchronously in
      xprt_rdma_free for invalidation rather than deferring it to MR
      recovery.
      
      Note: When the QP is in error state, posting a LocalInvalidate should
      flush and mark the MR as bad. There is no way the remote HCA can
      access that MR via a QP in error state, so it is effectively already
      inaccessible and thus safe for the Upper Layer to access. The next
      time the MR is used it should be recognized and cleaned up properly
      by frwr_op_map.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      8f66b1a5
    • NFS: remove special-case revalidate in nfs_opendir() · 1fea73ac
      Authored by NeilBrown
      Commit f5a73672 ("NFS: allow close-to-open cache semantics to
      apply to root of NFS filesystem") added a call to
      __nfs_revalidate_inode() to nfs_opendir(), as the lookup
      process wouldn't reliably do this.
      
      Subsequent commit a3fbbde7 ("VFS: we need to set LOOKUP_JUMPED
      on mountpoint crossing") made this unnecessary, so remove the
      now-redundant code.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      1fea73ac
    • NFS: revalidate "." etc correctly on "open". · b688741c
      Authored by NeilBrown
      For correct close-to-open semantics, NFS must validate
      the change attribute of a directory (or file) on open.
      
      Since commit ecf3d1f1 ("vfs: kill FS_REVAL_DOT by adding a
      d_weak_revalidate dentry op"), open() of "." or a path ending ".." is
      not revalidated reliably (except when that direct is a mount point).
      
      Prior to that commit, "." was revalidated using nfs_lookup_revalidate()
      which checks the LOOKUP_OPEN flag and forces revalidation if the flag is
      set.
      Since that commit, nfs_weak_revalidate() is used for NFSv3 (which
      ignores the flags) and nothing is used for NFSv4.
      
      This is fixed by using nfs_lookup_verify_inode() in
      nfs_weak_revalidate().  This does the revalidation exactly when needed.
      Also, add a definition of .d_weak_revalidate for NFSv4.
      
      The incorrect behavior is easily demonstrated by running "echo *" in
      some non-mountpoint NFS directory while watching network traffic.
      Without this patch, "echo *" sometimes doesn't produce any traffic.
      With the patch it always does.
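
      A stub-level sketch of the wiring (simplified types, not the
      kernel's structures; verify_inode stands in for
      nfs_lookup_verify_inode()):

        struct inode;
        struct dentry { struct inode *d_inode; };

        /* Stand-in: forces revalidation exactly when the flags say an
         * open-type access needs it. */
        static int verify_inode(struct inode *inode, unsigned int flags)
        {
            (void)inode; (void)flags;
            return 1;               /* 1 == dentry still valid */
        }

        /* v3's weak revalidate now consults the verify step instead of
         * ignoring the flags; v4 gains a .d_weak_revalidate where it
         * previously had none. */
        static int weak_revalidate(struct dentry *dentry, unsigned int flags)
        {
            return verify_inode(dentry->d_inode, flags);
        }

        struct dentry_operations {
            int (*d_weak_revalidate)(struct dentry *, unsigned int);
        };

        static const struct dentry_operations nfs_dops = {
            .d_weak_revalidate = weak_revalidate,
        };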
      
      Fixes: ecf3d1f1 ("vfs: kill FS_REVAL_DOT by adding a d_weak_revalidate dentry op")
      cc: stable@vger.kernel.org (3.9+)
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      b688741c
    • NFS: Don't compare apples to elephants to determine access bits · 1750d929
      Authored by Anna Schumaker
      The NFS_ACCESS_* flags aren't a 1:1 mapping to the MAY_* flags, so
      checking for MAY_WHATEVER might have surprising results in
      nfs*_proc_access().  Let's simplify this check when determining which
      bits to ask for, and do it in a generic place instead of copying code
      for each NFS version.
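
      The idea reduces to one explicit translation. A self-contained
      sketch (the flag values here are illustrative, which is exactly
      the point: the two namespaces are unrelated, so only an explicit
      mapping is safe):

        #define MAY_EXEC   0x1
        #define MAY_WRITE  0x2
        #define MAY_READ   0x4

        #define NFS_ACCESS_READ    0x0001
        #define NFS_ACCESS_LOOKUP  0x0002
        #define NFS_ACCESS_MODIFY  0x0004
        #define NFS_ACCESS_EXTEND  0x0008
        #define NFS_ACCESS_EXECUTE 0x0020

        /* One generic translation shared by every NFS version: build
         * the wire mask from whichever MAY_* bits are actually set. */
        static unsigned int nfs_access_mask(unsigned int may, int is_dir)
        {
            unsigned int mask = 0;

            if (may & MAY_READ)
                mask |= NFS_ACCESS_READ;
            if (may & MAY_WRITE)
                mask |= NFS_ACCESS_MODIFY | NFS_ACCESS_EXTEND;
            if (may & MAY_EXEC)
                mask |= is_dir ? NFS_ACCESS_LOOKUP : NFS_ACCESS_EXECUTE;
            return mask;
        }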
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      1750d929
    • NFS: Create NFS_ACCESS_* flags · 3c181827
      Authored by Anna Schumaker
      Passing the NFS v4 flags into the v3 code seems weird to me, even if
      they are defined to the same values.  This patch adds generic flags
      to help me feel better.
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      3c181827
  3. 16 October 2017 (1 commit)
  4. 15 October 2017 (10 commits)
  5. 14 October 2017 (12 commits)
    • x86/microcode: Do the family check first · 1f161f67
      Authored by Borislav Petkov
      On CPUs like AMD's Geode, for example, we shouldn't even try to load
      microcode because they do not support the modern microcode loading
      interface.
      
      However, we do the family check *after* the other checks whether the
      loader has been disabled on the command line or whether we're running in
      a guest.
      
      So move the family checks first in order to exit early if we're being
      loaded on an unsupported family.
      Reported-and-tested-by: Sven Glodowski <glodi1@arcor.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: <stable@vger.kernel.org> # 4.11..
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://bugzilla.suse.com/show_bug.cgi?id=1061396
      Link: http://lkml.kernel.org/r/20171012112316.977-1-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1f161f67
    • locking/lockdep: Disable cross-release features for now · b483cf3b
      Authored by Ingo Molnar
      Johan Hovold reported a big lockdep slowdown on his system, caused by lockdep:
      
      > I had noticed that the BeagleBone Black boot time appeared to have
      > increased significantly with 4.14 and yesterday I finally had time to
      > investigate it.
      >
      > Boot time (from "Linux version" to login prompt) had in fact doubled
      > since 4.13 where it took 17 seconds (with my current config) compared to
      > the 35 seconds I now see with 4.14-rc4.
      >
      > A quick bisect pointed to lockdep and specifically the following commit:
      >
      >	28a903f6 ("locking/lockdep: Handle non(or multi)-acquisition of a crosslock")
      
      Because the final v4.14 release is close, disable the cross-release lockdep
      features for now.
      Bisected-by: Johan Hovold <johan@kernel.org>
      Debugged-by: Johan Hovold <johan@kernel.org>
      Reported-by: Johan Hovold <johan@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: kernel-team@lge.com
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-mm@kvack.org
      Cc: linux-omap@vger.kernel.org
      Link: http://lkml.kernel.org/r/20171014072659.f2yr6mhm5ha3eou7@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b483cf3b
    • Merge branch '4.14-fixes' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus · be1f16ba
      Authored by Linus Torvalds
      Pull MIPS fixes from Ralf Baechle:
       "More MIPS fixes for 4.14:
      
         - Loongson 1: Set the default number of RX and TX queues to
           accommodate recent changes of the stmmac driver.
      
         - BPF: Fix uninitialised target compiler error.
      
         - Fix cmpxchg on 32 bit signed ints for 64 bit kernels with
           !kernel_uses_llsc
      
         - Fix generic-board-config.sh for builds using O=
      
         - Remove pr_err() calls from fpu_emu() for a case which is not a
           kernel error"
      
      * '4.14-fixes' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
        MIPS: math-emu: Remove pr_err() calls from fpu_emu()
        MIPS: Fix generic-board-config.sh for builds using O=
        MIPS: Fix cmpxchg on 32b signed ints for 64b kernel with !kernel_uses_llsc
        MIPS: loongson1: set default number of rx and tx queues for stmmac
        MIPS: bpf: Fix uninitialised target compiler error
      be1f16ba
    • x86/mm: Flush more aggressively in lazy TLB mode · b956575b
      Authored by Andy Lutomirski
      Since commit:
      
        94b1b03b ("x86/mm: Rework lazy TLB mode and TLB freshness tracking")
      
      x86's lazy TLB mode has been all the way lazy: when running a kernel thread
      (including the idle thread), the kernel keeps using the last user mm's
      page tables without attempting to maintain user TLB coherence at all.
      
      From a pure semantic perspective, this is fine -- kernel threads won't
      attempt to access user pages, so having stale TLB entries doesn't matter.
      
      Unfortunately, I forgot about a subtlety.  By skipping TLB flushes,
      we also allow any paging-structure caches that may exist on the CPU
      to become incoherent.  This means that we can have a
      paging-structure cache entry that references a freed page table, and
      the CPU is within its rights to do a speculative page walk starting
      at the freed page table.
      
      I can imagine this causing two different problems:
      
       - A speculative page walk starting from a bogus page table could read
         IO addresses.  I haven't seen any reports of this causing problems.
      
       - A speculative page walk that involves a bogus page table can install
         garbage in the TLB.  Such garbage would always be at a user VA, but
         some AMD CPUs have logic that triggers a machine check when it notices
         these bogus entries.  I've seen a couple reports of this.
      
      Boris further explains the failure mode:
      
      > It is actually more of an optimization which assumes that paging-structure
      > entries are in WB DRAM:
      >
      > "TlbCacheDis: cacheable memory disable. Read-write. 0=Enables
      > performance optimization that assumes PML4, PDP, PDE, and PTE entries
      > are in cacheable WB-DRAM; memory type checks may be bypassed, and
      > addresses outside of WB-DRAM may result in undefined behavior or NB
      > protocol errors. 1=Disables performance optimization and allows PML4,
      > PDP, PDE and PTE entries to be in any memory type. Operating systems
      > that maintain page tables in memory types other than WB-DRAM must set
      > TlbCacheDis to insure proper operation."
      >
      > The MCE generated is an NB protocol error to signal that
      >
      > "Link: A specific coherent-only packet from a CPU was issued to an
      > IO link. This may be caused by software which addresses page table
      > structures in a memory type other than cacheable WB-DRAM without
      > properly configuring MSRC001_0015[TlbCacheDis]. This may occur, for
      > example, when page table structure addresses are above top of memory. In
      > such cases, the NB will generate an MCE if it sees a mismatch between
      > the memory operation generated by the core and the link type."
      >
      > I'm assuming coherent-only packets don't go out on IO links, thus the
      > error.
      
      To fix this, reinstate TLB coherence in lazy mode.  With this patch
      applied, we do it in one of two ways:
      
       - If we have PCID, we simply switch back to init_mm's page tables
         when we enter a kernel thread -- this seems to be quite cheap
         except for the cost of serializing the CPU.
      
       - If we don't have PCID, then we set a flag and switch to init_mm
         the first time we would otherwise need to flush the TLB.
      
      The /sys/kernel/debug/x86/tlb_use_lazy_mode debug switch can be changed
      to override the default mode for benchmarking.
      
      In theory, we could optimize this better by only flushing the TLB in
      lazy CPUs when a page table is freed.  Doing that would require
      auditing the mm code to make sure that all page table freeing goes
      through tlb_remove_page() as well as reworking some data structures
      to implement the improved flush logic.
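
      Schematically, the two repair strategies look like this (a
      userspace model with hypothetical names; the real logic lives in
      the context-switch and TLB-flush paths):

        #include <stdbool.h>

        struct mm_struct;                     /* opaque address space */
        static struct mm_struct *init_mm_p;   /* the kernel's own mm  */

        static bool cpu_has_pcid;             /* assumed CPU feature   */
        static bool switch_pending;           /* per-CPU, no-PCID path */

        /* Entering a kernel thread while lazily borrowing a user mm. */
        static void enter_lazy(struct mm_struct **loaded_mm)
        {
            if (cpu_has_pcid)
                *loaded_mm = init_mm_p;  /* cheap with PCID: stop borrowing,
                                          * so paging-structure caches stay
                                          * coherent */
            else
                switch_pending = true;   /* defer the cost until a flush
                                          * would otherwise be needed */
        }

        /* First flush request that arrives for the borrowed mm. */
        static void flush_requested(struct mm_struct **loaded_mm)
        {
            if (switch_pending) {
                switch_pending = false;
                *loaded_mm = init_mm_p;  /* one switch instead of lazily
                                          * tracking TLB coherence */
            }
        }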
      Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Reported-by: Adam Borowski <kilobyte@angband.pl>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Eric Biggers <ebiggers@google.com>
      Cc: Johannes Hirte <johannes.hirte@datenkhaos.de>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nadav Amit <nadav.amit@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Roman Kagan <rkagan@virtuozzo.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 94b1b03b ("x86/mm: Rework lazy TLB mode and TLB freshness tracking")
      Link: http://lkml.kernel.org/r/20171009170231.fkpraqokz6e4zeco@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b956575b
    • Merge tag 'drm-fixes-for-v4.14-rc5' of git://people.freedesktop.org/~airlied/linux · 9aa0d2dd
      Authored by Linus Torvalds
      Pull drm fixes from Dave Airlie:
       "Couple of the arm people seem to wake up so this has imx and msm
        fixes, along with a bunch of i915 stable bounds fixes and an amdgpu
        regression fix.
      
        All seems pretty okay for now"
      
      * tag 'drm-fixes-for-v4.14-rc5' of git://people.freedesktop.org/~airlied/linux:
        drm/msm: fix _NO_IMPLICIT fencing case
        drm/msm: fix error path cleanup
        drm/msm/mdp5: Remove extra pm_runtime_put call in mdp5_crtc_cursor_set()
        drm/msm/dsi: Use correct pm_runtime_put variant during host_init
        drm/msm: fix return value check in _msm_gem_kernel_new()
        drm/msm: use proper memory barriers for updating tail/head
        drm/msm/mdp5: add missing max size for 8x74 v1
        drm/amdgpu: fix placement flags in amdgpu_ttm_bind
        drm/i915/bios: parse DDI ports also for CHV for HDMI DDC pin and DP AUX channel
        gpu: ipu-v3: pre: implement workaround for ERR009624
        gpu: ipu-v3: prg: wait for double buffers to be filled on channel startup
        gpu: ipu-v3: Allow channel burst locking on i.MX6 only
        drm/i915: Read timings from the correct transcoder in intel_crtc_mode_get()
        drm/i915: Order two completing nop_submit_request
        drm/i915: Silence compiler warning for hsw_power_well_enable()
        drm/i915: Use crtc_state_is_legacy_gamma in intel_color_check
        drm/i915/edp: Increase the T12 delay quirk to 1300ms
        drm/i915/edp: Get the Panel Power Off timestamp after panel is off
        sync_file: Return consistent status in SYNC_IOC_FILE_INFO
        drm/atomic: Unref duplicated drm_atomic_state in drm_atomic_helper_resume()
      9aa0d2dd
    • Merge tag 'drm-intel-fixes-2017-10-11' of... · a480f308
      Authored by Dave Airlie
      Merge tag 'drm-intel-fixes-2017-10-11' of git://anongit.freedesktop.org/drm/drm-intel into drm-fixes
      
      drm/i915 fixes for 4.14-rc5:
      
      Three fixes for stable:
      
      - Use crtc_state_is_legacy_gamma in intel_color_check (Maarten)
      - Read timings from the correct transcoder (Ville).
      - Fix HDMI on BSW (Jani).
      
      Other fixes:
      
      - eDP fixes (Manasi)
      - Silence compiler warnings (Chris)
      - Order two completing nop_submit_request (Chris)
      
      * tag 'drm-intel-fixes-2017-10-11' of git://anongit.freedesktop.org/drm/drm-intel:
        drm/i915/bios: parse DDI ports also for CHV for HDMI DDC pin and DP AUX channel
        drm/i915: Read timings from the correct transcoder in intel_crtc_mode_get()
        drm/i915: Order two completing nop_submit_request
        drm/i915: Silence compiler warning for hsw_power_well_enable()
        drm/i915: Use crtc_state_is_legacy_gamma in intel_color_check
        drm/i915/edp: Increase the T12 delay quirk to 1300ms
        drm/i915/edp: Get the Panel Power Off timestamp after panel is off
      a480f308
    • Merge branch 'msm-fixes-4.14-rc4' of git://people.freedesktop.org/~robclark/linux into drm-fixes · 7a5bea77
      Authored by Dave Airlie
      bunch of msm fixes
      
      * 'msm-fixes-4.14-rc4' of git://people.freedesktop.org/~robclark/linux:
        drm/msm: fix _NO_IMPLICIT fencing case
        drm/msm: fix error path cleanup
        drm/msm/mdp5: Remove extra pm_runtime_put call in mdp5_crtc_cursor_set()
        drm/msm/dsi: Use correct pm_runtime_put variant during host_init
        drm/msm: fix return value check in _msm_gem_kernel_new()
        drm/msm: use proper memory barriers for updating tail/head
        drm/msm/mdp5: add missing max size for 8x74 v1
      7a5bea77
    • Merge branch 'akpm' (patches from Andrew) · 06d97c58
      Authored by Linus Torvalds
      Merge misc fixes from Andrew Morton:
       "18 fixes"
      
      * emailed patches from Andrew Morton <akpm@linux-foundation.org>:
        mm, swap: use page-cluster as max window of VMA based swap readahead
        mm: page_vma_mapped: ensure pmd is loaded with READ_ONCE outside of lock
        kmemleak: clear stale pointers from task stacks
        fs/binfmt_misc.c: node could be NULL when evicting inode
        fs/mpage.c: fix mpage_writepage() for pages with buffers
        linux/kernel.h: add/correct kernel-doc notation
        tty: fall back to N_NULL if switching to N_TTY fails during hangup
        Revert "vmalloc: back off when the current task is killed"
        mm/cma.c: take __GFP_NOWARN into account in cma_alloc()
        scripts/kallsyms.c: ignore symbol type 'n'
        userfaultfd: selftest: exercise -EEXIST only in background transfer
        mm: only display online cpus of the numa node
        mm: remove unnecessary WARN_ONCE in page_vma_mapped_walk().
        mm/mempolicy: fix NUMA_INTERLEAVE_HIT counter
        include/linux/of.h: provide of_n_{addr,size}_cells wrappers for !CONFIG_OF
        mm/madvise.c: add description for MADV_WIPEONFORK and MADV_KEEPONFORK
        lib/Kconfig.debug: kernel hacking menu: runtime testing: keep tests together
        mm/migrate: fix indexing bug (off by one) and avoid out of bound access
      06d97c58
    • mm, swap: use page-cluster as max window of VMA based swap readahead · 61b63972
      Authored by Huang Ying
      When the VMA based swap readahead was introduced, a new knob
      
        /sys/kernel/mm/swap/vma_ra_max_order
      
      was added as the max window of VMA swap readahead.  This is to make it
      possible to use different max window for VMA based readahead and
      original physical readahead.  But Minchan Kim pointed out that this
      causes a regression: with the change, setting the page-cluster sysctl
      to zero can no longer disable swap readahead.
      
      To fix the regression, the page-cluster sysctl is used as the max window
      of both the VMA based swap readahead and original physical swap
      readahead.  If more fine grained control is needed in the future, more
      knobs can be added as the subordinate knobs of the page-cluster sysctl.
      
      The vma_ra_max_order knob is deleted.  Because the knob was introduced
      in v4.14-rc1, and this patch is aimed at being merged before the v4.14
      release, there should be no existing users of this newly added ABI.
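
      The knob semantics reduce to a single ceiling computation; as a
      sketch (hypothetical helper name):

        static unsigned int page_cluster;   /* /proc/sys/vm/page-cluster */

        /* Both the physical and the VMA-based readahead derive their
         * window ceiling from this one sysctl, so page_cluster == 0
         * disables swap readahead everywhere: only the faulting page
         * itself is read. */
        static unsigned int swap_ra_max_pages(void)
        {
            return page_cluster ? 1u << page_cluster : 1;
        }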
      
      Link: http://lkml.kernel.org/r/20171011070847.16003-1-ying.huang@intel.com
      Fixes: ec560175 ("mm, swap: VMA based swap readahead")
      Signed-off-by: N"Huang, Ying" <ying.huang@intel.com>
      Reported-by: NMinchan Kim <minchan@kernel.org>
      Acked-by: NMinchan Kim <minchan@kernel.org>
      Acked-by: NMichal Hocko <mhocko@suse.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Tim Chen <tim.c.chen@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      61b63972
    • mm: page_vma_mapped: ensure pmd is loaded with READ_ONCE outside of lock · a7b10095
      Authored by Will Deacon
      Loading the pmd without holding the pmd_lock exposes us to races with
      concurrent updaters of the page tables but, worse still, it also allows
      the compiler to cache the pmd value in a register and reuse it later on,
      even if we've performed a READ_ONCE in between and seen a more recent
      value.
      
      In the case of page_vma_mapped_walk, this leads to the following crash
      when the pmd loaded for the initial pmd_trans_huge check is all zeroes
      and a subsequent valid table entry is loaded by check_pmd.  We then
      proceed into map_pte, but the compiler re-uses the zero entry inside
      pte_offset_map, resulting in a junk pointer being installed in
      pvmw->pte:
      
        PC is at check_pte+0x20/0x170
        LR is at page_vma_mapped_walk+0x2e0/0x540
        [...]
        Process doio (pid: 2463, stack limit = 0xffff00000f2e8000)
        Call trace:
          check_pte+0x20/0x170
          page_vma_mapped_walk+0x2e0/0x540
          page_mkclean_one+0xac/0x278
          rmap_walk_file+0xf0/0x238
          rmap_walk+0x64/0xa0
          page_mkclean+0x90/0xa8
          clear_page_dirty_for_io+0x84/0x2a8
          mpage_submit_page+0x34/0x98
          mpage_process_page_bufs+0x164/0x170
          mpage_prepare_extent_to_map+0x134/0x2b8
          ext4_writepages+0x484/0xe30
          do_writepages+0x44/0xe8
          __filemap_fdatawrite_range+0xbc/0x110
          file_write_and_wait_range+0x48/0xd8
          ext4_sync_file+0x80/0x4b8
          vfs_fsync_range+0x64/0xc0
          SyS_msync+0x194/0x1e8
      
      This patch fixes the problem by ensuring that READ_ONCE is used before
      the initial checks on the pmd, and this value is subsequently used when
      checking whether or not the pmd is present.  pmd_check is removed and
      the pmd_present check is inlined directly.
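
      A minimal model of the fix, using the kernel's volatile-load idiom
      in simplified form (pmd_t and the walk are stand-ins, not the real
      code):

        typedef unsigned long pmd_t;

        #define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

        static int walk_one(pmd_t *pmdp)
        {
            /* Take one snapshot and use it for every later test.
             * Without READ_ONCE the compiler may legally reload *pmdp,
             * mixing a stale zero entry with a newer, valid table
             * pointer. */
            pmd_t pmde = READ_ONCE(*pmdp);

            if (pmde == 0)          /* pmd_none(): nothing mapped */
                return 0;

            /* ... continue the walk using pmde only, never re-reading
             * *pmdp ... */
            return 1;
        }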
      
      Link: http://lkml.kernel.org/r/1507222630-5839-1-git-send-email-will.deacon@arm.com
      Fixes: f27176cf ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Yury Norov <ynorov@caviumnetworks.com>
      Tested-by: Richard Ruigrok <rruigrok@codeaurora.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a7b10095
    • kmemleak: clear stale pointers from task stacks · ca182551
      Authored by Konstantin Khlebnikov
      Kmemleak considers any pointers on task stacks as references.  This
      patch clears newly allocated and reused vmap stacks.
      
      Link: http://lkml.kernel.org/r/150728990124.744199.8403409836394318684.stgit@buzz
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ca182551
    • fs/binfmt_misc.c: node could be NULL when evicting inode · 7e866006
      Authored by Eryu Guan
      inode->i_private is assigned by a Node pointer only after registering a
      new binary format, so it could be NULL if inode was created by
      bm_fill_super() (or iput() was called by the error path in
      bm_register_write()), and this could result in NULL pointer dereference
      when evicting such an inode.  e.g.  mount binfmt_misc filesystem then
      umount it immediately:
      
        mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc
        umount /proc/sys/fs/binfmt_misc
      
      will result in
      
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000013
        IP: bm_evict_inode+0x16/0x40 [binfmt_misc]
        ...
        Call Trace:
         evict+0xd3/0x1a0
         iput+0x17d/0x1d0
         dentry_unlink_inode+0xb9/0xf0
         __dentry_kill+0xc7/0x170
         shrink_dentry_list+0x122/0x280
         shrink_dcache_parent+0x39/0x90
         do_one_tree+0x12/0x40
         shrink_dcache_for_umount+0x2d/0x90
         generic_shutdown_super+0x1f/0x120
         kill_litter_super+0x29/0x40
         deactivate_locked_super+0x43/0x70
         deactivate_super+0x45/0x60
         cleanup_mnt+0x3f/0x70
         __cleanup_mnt+0x12/0x20
         task_work_run+0x86/0xa0
         exit_to_usermode_loop+0x6d/0x99
         syscall_return_slowpath+0xba/0xf0
         entry_SYSCALL_64_fastpath+0xa3/0xa
      
      Fix it by making sure Node (e) is not NULL.
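
      The shape of the fix, with stub types (not the kernel source;
      close_interp stands in for filp_close()):

        struct Node  { void *interp_file; };
        struct inode { void *i_private;  };

        static void close_interp(void *filp) { (void)filp; }

        static void bm_evict_inode_fixed(struct inode *inode)
        {
            struct Node *e = inode->i_private;

            /* i_private is assigned only once a binary format is
             * registered; inodes created by bm_fill_super(), or left
             * behind by the bm_register_write() error path, evict with
             * e == NULL. */
            if (e && e->interp_file)
                close_interp(e->interp_file);
        }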
      
      Link: http://lkml.kernel.org/r/20171010100642.31786-1-eguan@redhat.com
      Fixes: 83f91827 ("exec: binfmt_misc: shift filp_close(interp_file) from kill_node() to bm_evict_inode()")
      Signed-off-by: Eryu Guan <eguan@redhat.com>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7e866006