1. 12 October 2015 (2 commits)
    • xfs: add missing ilock around dio write last extent alignment · 009c6e87
      Committed by Brian Foster
      The iomap codepath (via get_blocks()) acquires and releases the inode
      lock in the case of a direct write that requires block allocation. This
      is because xfs_iomap_write_direct() allocates a transaction, which means
      the ilock must be dropped and reacquired after the transaction is
      allocated and reserved.
      
      xfs_iomap_write_direct() invokes xfs_iomap_eof_align_last_fsb() before
      the transaction is created and thus before the ilock is reacquired. This
      can lead to calls to xfs_iread_extents() and reads of the in-core extent
      list without any synchronization (via xfs_bmap_eof() and
      xfs_bmap_last_extent()). The assert in xfs_iread_extents() fails if the
      ilock is not held, but this is not currently seen in practice because
      the current callers have already invoked xfs_bmapi_read().
      
      What has been seen in practice are reports of crashes down in the
      xfs_bmap_eof() codepath on direct writes due to seemingly bogus pointer
      references from xfs_iext_get_ext(). While an explicit reproducer is not
      currently available to confirm the cause of the problem, crash analysis
      and code inspection from David Jeffery identified the insufficient
      locking.
      
      xfs_iomap_eof_align_last_fsb() is called from other contexts with the
      inode lock already held, so we cannot acquire it therein.
      __xfs_get_blocks() acquires and drops the ilock with variable flags to
      cover the event that the extent list must be read in. The common case is
      that __xfs_get_blocks() acquires the shared ilock. To provide locking
      around the last extent alignment call without adding more lock cycles to
      the dio path, update xfs_iomap_write_direct() to expect the shared ilock
      held on entry and do the extent alignment under its protection. Demote
      the lock, if necessary, from __xfs_get_blocks() and push the
      xfs_qm_dqattach() call outside of the shared lock critical section.
      Also, add an assert to document that the extent list is always expected
      to be present in this path. Otherwise, we risk a call to
      xfs_iread_extents() while under the shared ilock. This is safe as all
      current callers have executed an xfs_bmapi_read() call under the current
      iolock context.
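      
      The resulting protocol is: read the in-core extent state only under the
      shared ilock, drop the lock across the blocking transaction allocation,
      and retake it exclusively for the modification. Purely as an analogy, a
      user-space sketch of that shape using a POSIX rwlock in place of the
      ilock (none of the names or the alignment rule below are XFS code):
      
        #include <pthread.h>
        #include <stdio.h>

        static pthread_rwlock_t ilock = PTHREAD_RWLOCK_INITIALIZER;
        static long last_block = 100;   /* stands in for the in-core extent list */

        int main(void)
        {
            long aligned;

            /* 1. read shared state only while the shared lock is held */
            pthread_rwlock_rdlock(&ilock);
            aligned = (last_block + 7) & ~7L;   /* align up to 8 "blocks" */
            pthread_rwlock_unlock(&ilock);

            /* 2. the blocking "transaction allocation" runs with no lock held */

            /* 3. modifications happen under the exclusive lock */
            pthread_rwlock_wrlock(&ilock);
            last_block = aligned;
            pthread_rwlock_unlock(&ilock);

            printf("new last block: %ld\n", last_block);
            return 0;
        }
      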
      Reported-by: David Jeffery <djeffery@redhat.com>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      009c6e87
    • cancel the setfilesize transaction when an I/O error happens · 5cb13dcd
      Committed by Zhao Hongjiang
      When I ran xfstests case 073, the remount process blocked waiting for
      the transaction count to drop to zero. I found that an I/O error had
      occurred and the setfilesize transaction was not released properly.
      Add a change to cancel the transaction when an I/O error happens in
      this case.
      
      Reproduction steps:
      1. dd if=/dev/zero of=xfs1.img bs=1M count=2048
      2. mkfs.xfs xfs1.img
      3. losetup -f ./xfs1.img /dev/loop0
      4. mount -t xfs /dev/loop0 /home/test_dir/
      5. mkdir /home/test_dir/test
      6. mkfs.xfs -dfile,name=image,size=2g
      7. mount -t xfs -o loop image /home/test_dir/test
      8. cp a file bigger than 2g to /home/test_dir/test
      9. mount -t xfs -o remount,ro /home/test_dir/test
      
      [ dchinner: moved io error detection to xfs_setfilesize_ioend() after
        transaction context restoration. ]
      Signed-off-by: Zhao Hongjiang <zhaohongjiang@huawei.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      5cb13dcd
  2. 20 September 2015 (1 commit)
    • fs-writeback: unplug before cond_resched in writeback_sb_inodes · 590dca3a
      Committed by Chris Mason
      Commit 505a666e ("writeback: plug writeback in wb_writeback() and
      writeback_inodes_wb()") has us holding a plug during writeback_sb_inodes,
      which increases the merge rate when relatively contiguous small files
      are written by the filesystem.  It helps both on flash and spindles.
      
      For an fs_mark workload creating 4K files in parallel across 8 drives,
      this commit improves performance ~9% more by unplugging before calling
      cond_resched().  cond_resched() doesn't trigger an implicit unplug, so
      explicitly getting the IO down to the device before scheduling reduces
      latencies for anyone waiting on clean pages.
      
      It also cuts down on how often we use kblockd to unplug, which means
      less work bouncing from one workqueue to another.
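      
      In code, the change boils down to flushing the plugged I/O before
      yielding. A rough sketch of the pattern (illustrative only; see the
      commit itself for the exact hunk):
      
        /* inside the writeback loop, with a plug held further up the stack */
        if (need_resched()) {
            /*
             * cond_resched() does not unplug for us, so push the bios
             * already queued behind the plug down to the device before
             * giving up the CPU.
             */
            blk_flush_plug(current);
            cond_resched();
        }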
      
      Many more details about how we got here:
      
        https://lkml.org/lkml/2015/9/11/570
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      590dca3a
  3. 18 September 2015 (1 commit)
  4. 16 September 2015 (2 commits)
  5. 13 September 2015 (1 commit)
    • writeback: plug writeback in wb_writeback() and writeback_inodes_wb() · 505a666e
      Committed by Linus Torvalds
      We had to revert the plugging in writeback_sb_inodes() because the
      wb->list_lock is held, but we could easily plug at a higher level before
      taking that lock, and unplug after releasing it.  This does that.
      
      Chris will run performance numbers, just to verify that this approach is
      comparable to the alternative (we could just drop and re-take the lock
      around the blk_finish_plug() rather than these two commits).
      
      I'd have preferred waiting for actual performance numbers before picking
      one approach over the other, but I don't want to release rc1 with the
      known "sleeping function called from invalid context" issue, so I'll
      pick this cleanup version for now.  But if the numbers show that we
      really want to plug just at the writeback_sb_inodes() level, and we
      should just play ugly games with the spinlock, we'll switch to that.
      
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <jbacik@fb.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      505a666e
  6. 12 September 2015 (4 commits)
    • [CIFS] mount option sec=none not displayed properly in /proc/mounts · eda2116f
      Committed by Steve French
      When the user specifies "sec=none" in a cifs mount, we set the
      sectype as unspecified (and set a flag, and the username will be
      null) rather than setting the sectype to "none", so
      cifs_show_security() was not displaying it properly in cifs
      /proc/mounts entries.
      Signed-off-by: Steve French <steve.french@primarydata.com>
      Reviewed-by: Jeff Layton <jlayton@poochiereds.net>
      eda2116f
    • revert "ocfs2/dlm: use list_for_each_entry instead of list_for_each" · e527b22c
      Committed by Andrew Morton
      Revert commit f83c7b5e ("ocfs2/dlm: use list_for_each_entry instead
      of list_for_each").
      
      list_for_each_entry() will dereference its `pos' argument, which can be
      NULL in dlm_process_recovery_data().
      Reported-by: Julia Lawall <julia.lawall@lip6.fr>
      Reported-by: Fengguang Wu <fengguang.wu@gmail.com>
      Cc: Joseph Qi <joseph.qi@huawei.com>
      Cc: Mark Fasheh <mfasheh@suse.de>
      Cc: Joel Becker <jlbec@evilplan.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e527b22c
    • fs/seq_file: convert int seq_vprint/seq_printf/etc... returns to void · 6798a8ca
      Committed by Joe Perches
      The seq_<foo> function return values were frequently misused.
      
      See: commit 1f33c41c ("seq_file: Rename seq_overflow() to
           seq_has_overflowed() and make public")
      
      All uses of these return values have been removed, so convert the
      return types to void.
      
      Miscellanea:
      
      o Move the seq_put_decimal_<type> and seq_escape prototypes closer to
        the other seq_vprintf prototypes
      o Reorder seq_putc and seq_puts to return early on overflow
      o Add argument names to seq_vprintf and seq_printf
      o Update the seq_escape kernel-doc
      o Convert a couple of leading spaces to tabs in seq_escape
      Signed-off-by: Joe Perches <joe@perches.com>
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6798a8ca
    • Revert "writeback: plug writeback at a high level" · 0ba13fd1
      Committed by Linus Torvalds
      This reverts commit d353d758.
      
      Doing the block layer plug/unplug inside writeback_sb_inodes() is
      broken, because that function is actually called with a spinlock held:
      wb->list_lock, as pointed out by Chris Mason.
      
      Chris suggested just dropping and re-taking the spinlock around the
      blk_finish_plug() call (the plugging itself can happen under the
      spinlock), and that would technically work, but is just disgusting.
      
      We do something fairly similar - but not quite as disgusting because we
      at least have a better reason for it - in writeback_single_inode(), so
      it's not like the caller can depend on the lock being held over the
      call, but in this case there just isn't any good reason for that
      "release and re-take the lock" pattern.
      
      [ In general, we should really strive to avoid the "release and retake"
        pattern for locks, because in the general case it can easily cause
        subtle bugs when the caller caches any state around the call that
        might be invalidated by dropping the lock even just temporarily. ]
      
      But in this case, the plugging should be easy to just move up to the
      callers before the spinlock is taken, which should even improve the
      effectiveness of the plug.  So there is really no good reason to play
      games with locking here.
      
      I'll send off a test-patch so that Dave Chinner can verify that that
      plug movement works.  In the meantime this just reverts the problematic
      commit and adds a comment to the function so that we hopefully don't
      make this mistake again.
      Reported-by: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <jbacik@fb.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0ba13fd1
  7. 11 September 2015 (17 commits)
    • CIFS: fix type confusion in copy offload ioctl · 4c17a6d5
      Committed by Jann Horn
      This might lead to local privilege escalation (code execution as
      kernel) for systems where the following conditions are met:
      
       - CONFIG_CIFS_SMB2 and CONFIG_CIFS_POSIX are enabled
       - a cifs filesystem is mounted where:
        - the mount option "vers" was used and set to a value >=2.0
        - the attacker has write access to at least one file on the filesystem
      
      To attack this, an attacker would have to guess the target_tcon
      pointer (but guessing wrong doesn't cause a crash, it just returns an
      error code) and win a narrow race.
      
      CC: Stable <stable@vger.kernel.org>
      Signed-off-by: Jann Horn <jann@thejh.net>
      Signed-off-by: Steve French <smfrench@gmail.com>
      4c17a6d5
    • mm: mark most vm_operations_struct const · 7cbea8dc
      Committed by Kirill A. Shutemov
      With two exceptions (drm/qxl and drm/radeon) all vm_operations_struct
      structs should be constant.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reviewed-by: Oleg Nesterov <oleg@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7cbea8dc
    • namei: fix warning while make xmldocs caused by namei.c · 2a78b857
      Committed by Masanari Iida
      Fix the following warnings:
      
      Warning(.//fs/namei.c:2422): No description found for parameter 'nd'
      Warning(.//fs/namei.c:2422): Excess function parameter 'nameidata'
      description in 'path_mountpoint'
      Signed-off-by: Masanari Iida <standby24x7@gmail.com>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2a78b857
    • fs/affs: make root lookup from blkdev logical size · e852d82a
      Committed by Pranay Kr. Srivastava
      This patch resolves https://bugzilla.kernel.org/show_bug.cgi?id=16531.
      
      When the logical block size of the block device is larger than 512
      bytes, the computed sector numbers become larger than the device can
      support.
      
      Make affs start lookup based on the device's logical sector size instead
      of 512.
      Reported-by: Mark <markk@clara.co.uk>
      Suggested-by: Mark <markk@clara.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e852d82a
    • seq_file: provide an analogue of print_hex_dump() · 37607102
      Committed by Andy Shevchenko
      This introduces a new helper and switches current users over to it.  All
      patches are compile tested; kmemleak is tested via its own test suite.
      
      This patch (of 6):
      
      The new seq_hex_dump() is a complete analogue of print_hex_dump().
      
      We already have a few users of this functionality; the helper allows
      them to reduce their code.
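      
      A typical call mirrors print_hex_dump() minus the log level; a sketch of
      the expected usage in a seq_file show() callback follows, with the
      buffer and names invented for illustration:
      
        static int example_show(struct seq_file *m, void *v)
        {
            static const unsigned char buf[32] = { 0xde, 0xad, 0xbe, 0xef };

            seq_puts(m, "raw dump:\n");
            /* same argument layout as print_hex_dump(), minus the log level */
            seq_hex_dump(m, "  ", DUMP_PREFIX_OFFSET, 16, 1,
                         buf, sizeof(buf), true);
            return 0;
        }
      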
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Joe Perches <joe@perches.com>
      Cc: Tadeusz Struk <tadeusz.struk@intel.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Tuchscherer <ingo.tuchscherer@de.ibm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Vladimir Kondratiev <qca_vkondrat@qca.qualcomm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      37607102
    • fs: Don't dump core if the corefile would become world-readable. · 40f705a7
      Committed by Jann Horn
      On a filesystem like vfat, all files are created with the same owner
      and mode independent of who created the file. When a vfat filesystem
      is mounted with root as owner of all files and read access for everyone,
      root's processes left world-readable coredumps on it (but other
      users' processes only left empty corefiles when given write access
      because of the uid mismatch).
      
      Given that the old behavior was inconsistent and insecure, I don't see
      a problem with changing it. Now, all processes refuse to dump core unless
      the resulting corefile will only be readable by their owner.
      Signed-off-by: Jann Horn <jann@thejh.net>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      40f705a7
    • fs: if a coredump already exists, unlink and recreate with O_EXCL · fbb18169
      Committed by Jann Horn
      It was possible for an attacking user to trick root (or another user) into
      writing his coredumps into an attacker-readable, pre-existing file using
      rename() or link(), causing the disclosure of secret data from the victim
      process' virtual memory.  Depending on the configuration, it was also
      possible to trick root into overwriting system files with coredumps.  Fix
      that issue by never writing coredumps into existing files.
      
      Requirements for the attack:
       - The attack only applies if the victim's process has a nonzero
         RLIMIT_CORE and is dumpable.
       - The attacker can trick the victim into coredumping into an
         attacker-writable directory D, either because the core_pattern is
         relative and the victim's cwd is attacker-writable or because an
         absolute core_pattern pointing to a world-writable directory is used.
       - The attacker has one of these:
        A: on a system with protected_hardlinks=0:
           execute access to a folder containing a victim-owned,
           attacker-readable file on the same partition as D, and the
           victim-owned file will be deleted before the main part of the attack
           takes place. (In practice, there are lots of files that fulfill
           this condition, e.g. entries in Debian's /var/lib/dpkg/info/.)
           This does not apply to most Linux systems because most distros set
           protected_hardlinks=1.
        B: on a system with protected_hardlinks=1:
           execute access to a folder containing a victim-owned,
           attacker-readable and attacker-writable file on the same partition
           as D, and the victim-owned file will be deleted before the main part
           of the attack takes place.
           (This seems to be uncommon.)
        C: on any system, independent of protected_hardlinks:
           write access to a non-sticky folder containing a victim-owned,
           attacker-readable file on the same partition as D
           (This seems to be uncommon.)
      
      The basic idea is that the attacker moves the victim-owned file to where
      he expects the victim process to dump its core.  The victim process dumps
      its core into the existing file, and the attacker reads the coredump from
      it.
      
      If the attacker can't move the file because he does not have write access
      to the containing directory, he can instead link the file to a directory
      he controls, then wait for the original link to the file to be deleted
      (because the kernel checks that the link count of the corefile is 1).
      
      A less reliable variant that requires D to be non-sticky works with link()
      and does not require deletion of the original link: link() the file into
      D, but then unlink() it directly before the kernel performs the link count
      check.
      
      On systems with protected_hardlinks=0, this variant allows an attacker to
      not only gain information from coredumps, but also clobber existing,
      victim-writable files with coredumps.  (This could theoretically lead to a
      privilege escalation.)
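      
      The rule the patch enforces, never dump into a file the dumper did not
      just create itself, corresponds to the classic unlink-then-O_EXCL
      pattern. A small user-space sketch of that pattern (not the kernel
      implementation) is:
      
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static int open_corefile(const char *path)
        {
            /* drop whatever already exists at the path (ignore errors) */
            unlink(path);

            /*
             * O_EXCL guarantees we only write into a file we created
             * ourselves; if someone recreates the path first, fail instead.
             */
            int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
            if (fd < 0)
                perror("open corefile");
            return fd;
        }

        int main(void)
        {
            const char *msg = "pretend core data\n";
            int fd = open_corefile("core.demo");

            if (fd < 0)
                return 1;
            if (write(fd, msg, strlen(msg)) < 0)
                perror("write");
            close(fd);
            return 0;
        }
      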
      Signed-off-by: Jann Horn <jann@thejh.net>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fbb18169
    • hfs: fix B-tree corruption after insertion at position 0 · b4cc0efe
      Committed by Hin-Tak Leung
      Fix B-tree corruption when a new record is inserted at position 0 in the
      node in hfs_brec_insert().
      
      This is the same change to the corresponding hfs B-tree code as Sergei
      Antonov's "hfsplus: fix B-tree corruption after insertion at position 0",
      to keep similar code paths in the hfs and hfsplus drivers in sync, where
      appropriate.
      Signed-off-by: Hin-Tak Leung <htl10@users.sourceforge.net>
      Cc: Sergei Antonov <saproj@gmail.com>
      Cc: Joe Perches <joe@perches.com>
      Reviewed-by: Vyacheslav Dubeyko <slava@dubeyko.com>
      Cc: Anton Altaparmakov <anton@tuxera.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b4cc0efe
    • hfs,hfsplus: cache pages correctly between bnode_create and bnode_free · 7cb74be6
      Committed by Hin-Tak Leung
      Pages looked up by __hfs_bnode_create() (called by hfs_bnode_create() and
      hfs_bnode_find() for finding or creating pages corresponding to an inode)
      are immediately kmap()'ed and used (both read and write) and kunmap()'ed,
      and should not be page_cache_release()'ed until hfs_bnode_free().
      
      This patch fixes a problem I first saw in July 2012: merely running "du"
      on a large hfsplus-mounted directory a few times on a reasonably loaded
      system would get the hfsplus driver all confused and complaining about
      B-tree inconsistencies, and generate a "BUG: Bad page state".  Most
      recently, I can generate this problem on up-to-date Fedora 22 with shipped
      kernel 4.0.5, by running "du /" (="/" + "/home" + "/mnt" + other smaller
      mounts) and "du /mnt" simultaneously on two windows, where /mnt is a
      lightly-used QEMU VM image of the full Mac OS X 10.9:
      
      $ df -i / /home /mnt
      Filesystem                  Inodes   IUsed      IFree IUse% Mounted on
      /dev/mapper/fedora-root    3276800  551665    2725135   17% /
      /dev/mapper/fedora-home   52879360  716221   52163139    2% /home
      /dev/nbd0p2             4294967295 1387818 4293579477    1% /mnt
      
      After applying the patch, I was able to run "du /" (60+ times) and "du
      /mnt" (150+ times) continuously and simultaneously for 6+ hours.
      
      There are many reports of the hfsplus driver getting confused under load
      and generating "BUG: Bad page state" or other similar issues over the
      years.  [1]
      
      The unpatched code [2] has always been wrong since it entered the kernel
      tree.  The only reason why it gets away with it is that the
      kmap/memcpy/kunmap follow very quickly after the page_cache_release() so
      the kernel has not had a chance to reuse the memory for something else,
      most of the time.
      
      The current RW driver appears to have followed the design and development
      of the earlier read-only hfsplus driver [3], whereby version 0.1 (Dec
      2001) had a B-tree node-centric approach to
      read_cache_page()/page_cache_release() per bnode_get()/bnode_put(),
      migrating towards version 0.2 (June 2002) of caching and releasing pages
      per inode extents.  When the current RW code first entered the kernel [2]
      in 2005, there was an REF_PAGES conditional (and "//" commented out code)
      to switch between B-node-centric paging and inode-centric paging.  There
      was a mistake with the direction of one of the REF_PAGES conditionals in
      __hfs_bnode_create().  In a subsequent "remove debug code" commit [4], the
      read_cache_page()/page_cache_release() per bnode_get()/bnode_put() were
      removed, but a page_cache_release() was mistakenly left in (propagating
      the "REF_PAGES <-> !REF_PAGE" mistake), and the commented-out
      page_cache_release() in bnode_release() (which should be spanned by
      !REF_PAGES) was never enabled.
      
      References:
      [1]:
      Michael Fox, Apr 2013
      http://www.spinics.net/lists/linux-fsdevel/msg63807.html
      ("hfsplus volume suddenly inaccessable after 'hfs: recoff %d too large'")
      
      Sasha Levin, Feb 2015
      http://lkml.org/lkml/2015/2/20/85 ("use after free")
      
      https://bugs.launchpad.net/ubuntu/+source/linux/+bug/740814
      https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1027887
      https://bugzilla.kernel.org/show_bug.cgi?id=42342
      https://bugzilla.kernel.org/show_bug.cgi?id=63841
      https://bugzilla.kernel.org/show_bug.cgi?id=78761
      
      [2]:
      http://git.kernel.org/cgit/linux/kernel/git/tglx/history.git/commit/\
      fs/hfs/bnode.c?id=d1081202f1d0ee35ab0beb490da4b65d4bc763db
      commit d1081202f1d0ee35ab0beb490da4b65d4bc763db
      Author: Andrew Morton <akpm@osdl.org>
      Date:   Wed Feb 25 16:17:36 2004 -0800
      
          [PATCH] HFS rewrite
      
      http://git.kernel.org/cgit/linux/kernel/git/tglx/history.git/commit/\
      fs/hfsplus/bnode.c?id=91556682e0bf004d98a529bf829d339abb98bbbd
      
      commit 91556682e0bf004d98a529bf829d339abb98bbbd
      Author: Andrew Morton <akpm@osdl.org>
      Date:   Wed Feb 25 16:17:48 2004 -0800
      
          [PATCH] HFS+ support
      
      [3]:
      http://sourceforge.net/projects/linux-hfsplus/
      
      http://sourceforge.net/projects/linux-hfsplus/files/Linux%202.4.x%20patch/hfsplus%200.1/
      http://sourceforge.net/projects/linux-hfsplus/files/Linux%202.4.x%20patch/hfsplus%200.2/
      
      http://linux-hfsplus.cvs.sourceforge.net/viewvc/linux-hfsplus/linux/\
      fs/hfsplus/bnode.c?r1=1.4&r2=1.5
      
      Date:   Thu Jun 6 09:45:14 2002 +0000
      Use buffer cache instead of page cache in bnode.c. Cache inode extents.
      
      [4]:
      http://git.kernel.org/cgit/linux/kernel/git/\
      stable/linux-stable.git/commit/?id=a5e3985f
      
      commit a5e3985f
      Author: Roman Zippel <zippel@linux-m68k.org>
      Date:   Tue Sep 6 15:18:47 2005 -0700
      
      [PATCH] hfs: remove debug code
      Signed-off-by: Hin-Tak Leung <htl10@users.sourceforge.net>
      Signed-off-by: Sergei Antonov <saproj@gmail.com>
      Reviewed-by: Anton Altaparmakov <anton@tuxera.com>
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Vyacheslav Dubeyko <slava@dubeyko.com>
      Cc: Sougata Santra <sougata@tuxera.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7cb74be6
    • fs/coda: fix readlink buffer overflow · 3725e9dd
      Committed by Jan Harkes
      Dan Carpenter discovered a buffer overflow in the Coda file system
      readlink code.  A userspace file system daemon can return a 4096 byte
      result which then triggers a one byte write past the allocated readlink
      result buffer.
      
      This does not trigger with an unmodified Coda implementation because Coda
      has a 1024 byte limit for symbolic links, however other userspace file
      systems using the Coda kernel module could be affected.
      
      Although this is an obvious overflow, I don't think this has to be handled
      as too sensitive from a security perspective because the overflow is on
      the Coda userspace daemon side which already needs root to open Coda's
      kernel device and to mount the file system before we get to the point that
      links can be read.
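      
      The bug class is the classic one-byte overflow: a result exactly as
      large as the buffer still needs a NUL terminator. A generic user-space
      illustration of the wrong and right bounds checks (not the Coda code;
      names invented for illustration):
      
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define RESULT_MAX 4096   /* size of the allocated result buffer */

        static char *copy_link_result(const char *result, size_t retlen)
        {
            /* Wrong: retlen == RESULT_MAX passes, yet retlen + 1 bytes are written. */
            /* if (retlen > RESULT_MAX) return NULL; */

            /* Right: leave room for the terminating NUL. */
            if (retlen >= RESULT_MAX)
                return NULL;

            char *buf = malloc(RESULT_MAX);
            if (!buf)
                return NULL;
            memcpy(buf, result, retlen);
            buf[retlen] = '\0';
            return buf;
        }

        int main(void)
        {
            char *s = copy_link_result("/coda/target", strlen("/coda/target"));

            if (s) {
                puts(s);
                free(s);
            }
            return 0;
        }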
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Jan Harkes <jaharkes@cs.cmu.edu>
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3725e9dd
    • proc: convert to kstrto*()/kstrto*_from_user() · 774636e1
      Committed by Alexey Dobriyan
      Convert from manual allocation/copy_from_user/...  to the kstrto*()
      family, which was designed for exactly that.
      
      One case cannot be converted to kstrto*_from_user(), which would have
      made the code even simpler, because of whitespace stripping, oh well...
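      
      For reference, the converted pattern typically looks like the sketch
      below, a generic proc write handler rather than one of the handlers
      touched by this patch:
      
        static ssize_t example_write(struct file *file, const char __user *buf,
                                     size_t count, loff_t *ppos)
        {
            int val;
            /* parse a decimal integer straight from the user buffer,
             * replacing the old kmalloc + copy_from_user + parse sequence */
            int err = kstrtoint_from_user(buf, count, 10, &val);

            if (err)
                return err;
            /* ... use val ... */
            return count;
        }
      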
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      774636e1
    • proc: change proc_subdir_lock to a rwlock · ecf1a3df
      Committed by Waiman Long
      The proc_subdir_lock spinlock is used to allow only one task to make
      changes to the proc directory structure as well as to look up
      information in it.  However, the information lookup part can actually be
      entered by more than one task, as the pde_get() and pde_put() reference
      count updates in the critical sections are atomic increments and
      decrements respectively and so are safe when performed concurrently.
      
      The x86 architecture already uses qrwlock, which is fair, and other
      architectures like ARM are in the process of switching to qrwlock, so
      unfairness shouldn't be a concern in this conversion.
      
      This patch changes proc_subdir_lock to an rwlock in order to enable
      concurrent lookups. The following functions were modified to take a
      write lock:
       - proc_register()
       - remove_proc_entry()
       - remove_proc_subtree()
      
      The following functions were modified to take a read lock:
       - xlate_proc_name()
       - proc_lookup_de()
       - proc_readdir_de()
      
      A parallel /proc filesystem search with the "find" command (1000 threads)
      was run on a 4-socket Haswell-EX box (144 threads).  Before the patch, the
      parallel search took about 39s.  After the patch, the parallel find took
      only 25s, a saving of about 14s.
      
      The micro-benchmark that I used was artificial, but it was used to
      reproduce an exit-hanging problem that I saw in a real application.  In
      fact, allowing only one task to do a lookup at a time seems too limiting
      to me.
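      
      The idea is plain reader-writer locking: lookups take the lock shared
      and may run in parallel, while registration and removal take it
      exclusive. A minimal user-space analogy with POSIX rwlocks (the kernel
      itself uses rwlock_t/qrwlock, not pthreads):
      
        #include <pthread.h>
        #include <stdio.h>

        static pthread_rwlock_t subdir_lock = PTHREAD_RWLOCK_INITIALIZER;
        static int entry_count;   /* stands in for the proc directory structure */

        static void *lookup(void *arg)
        {
            (void)arg;
            /* many lookups may hold the read lock at the same time */
            pthread_rwlock_rdlock(&subdir_lock);
            printf("lookup sees %d entries\n", entry_count);
            pthread_rwlock_unlock(&subdir_lock);
            return NULL;
        }

        static void register_entry(void)
        {
            /* registration/removal still excludes everyone else */
            pthread_rwlock_wrlock(&subdir_lock);
            entry_count++;
            pthread_rwlock_unlock(&subdir_lock);
        }

        int main(void)
        {
            pthread_t a, b;

            register_entry();
            pthread_create(&a, NULL, lookup, NULL);
            pthread_create(&b, NULL, lookup, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
        }
      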
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ecf1a3df
    • procfs: always expose /proc/<pid>/map_files/ and make it readable · bdb4d100
      Committed by Calvin Owens
      Currently, /proc/<pid>/map_files/ is restricted to CAP_SYS_ADMIN, and is
      only exposed if CONFIG_CHECKPOINT_RESTORE is set.
      
      Each mapped file region gets a symlink in /proc/<pid>/map_files/
      corresponding to the virtual address range at which it is mapped.  The
      symlinks work like the symlinks in /proc/<pid>/fd/, so you can follow them
      to the backing file even if that backing file has been unlinked.
      
      Currently, files which are mapped, unlinked, and closed are impossible to
      stat() from userspace.  Exposing /proc/<pid>/map_files/ closes this
      functionality "hole".
      
      Not being able to stat() such files makes noticing and explicitly
      accounting for the space they use on the filesystem impossible.  You can
      work around this by summing up the space used by every file in the
      filesystem and subtracting that total from what statfs() tells you, but
      that obviously isn't great, and it becomes unworkable once your filesystem
      becomes large enough.
      
      This patch moves map_files/ out from behind CONFIG_CHECKPOINT_RESTORE, and
      adjusts the permissions enforced on it as follows:
      
      * proc_map_files_lookup()
      * proc_map_files_readdir()
      * map_files_d_revalidate()
      
      	Remove the CAP_SYS_ADMIN restriction, leaving only the current
      	restriction requiring PTRACE_MODE_READ. The information made
      	available to userspace by these three functions is already
      	available in /proc/PID/maps with MODE_READ, so I don't see any
      	reason to limit them any further (see below for more detail).
      
      * proc_map_files_follow_link()
      
      	This stub has been added, and requires that the user have
      	CAP_SYS_ADMIN in order to follow the links in map_files/,
      	since there was concern on LKML both about the potential for
      	bypassing permissions on ancestor directories in the path to
      	files pointed to, and about what happens with more exotic
      	memory mappings created by some drivers (ie dma-buf).
      
      In older versions of this patch, I changed every permission check in
      the four functions above to enforce MODE_ATTACH instead of MODE_READ.
      This was an oversight on my part, and after revisiting the discussion
      it seems that nobody was concerned about anything outside of what is
      made possible by ->follow_link(). So in this version, I've left the
      checks for PTRACE_MODE_READ as-is.
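      
      A small user-space demo of the interface described above (the program
      itself is not part of the patch): list /proc/self/map_files/ and resolve
      each symlink to its backing file.
      
        #include <dirent.h>
        #include <limits.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            const char *dirpath = "/proc/self/map_files";
            DIR *d = opendir(dirpath);
            struct dirent *de;
            char link[PATH_MAX], target[PATH_MAX];

            if (!d) {
                perror("opendir");
                return 1;
            }
            while ((de = readdir(d)) != NULL) {
                if (de->d_name[0] == '.')
                    continue;
                snprintf(link, sizeof(link), "%s/%s", dirpath, de->d_name);
                ssize_t n = readlink(link, target, sizeof(target) - 1);
                if (n < 0)
                    continue;
                target[n] = '\0';
                /* entry name is the "start-end" virtual address range */
                printf("%s -> %s\n", de->d_name, target);
            }
            closedir(d);
            return 0;
        }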
      
      [akpm@linux-foundation.org: catch up with concurrent proc_pid_follow_link() changes]
      Signed-off-by: Calvin Owens <calvinowens@fb.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Joe Perches <joe@perches.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bdb4d100
    • proc: add cond_resched to /proc/kpage* read/write loop · d3691d2c
      Committed by Vladimir Davydov
      Reading or writing a /proc/kpage* file may take a long time on machines
      with a lot of RAM installed, so add a cond_resched() to the read/write
      loop.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Suggested-by: Andres Lagar-Cavilla <andreslc@google.com>
      Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d3691d2c
    • proc: export idle flag via kpageflags · f074a8f4
      Committed by Vladimir Davydov
      As noted by Minchan, a benefit of reading idle flag from /proc/kpageflags
      is that one can easily filter dirty and/or unevictable pages while
      estimating the size of unused memory.
      
      Note that idle flag read from /proc/kpageflags may be stale in case the
      page was accessed via a PTE, because it would be too costly to iterate
      over all page mappings on each /proc/kpageflags read to provide an
      up-to-date value.  To make sure the flag is up-to-date one has to read
      /sys/kernel/mm/page_idle/bitmap first.
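      
      A sketch of reading the flag from user space: the program counts pages
      currently marked idle. The KPF_IDLE bit index of 25 is an assumption
      taken from kernel-page-flags.h of this era, and the program needs root.
      
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>

        #define KPF_IDLE 25   /* assumed bit index of the Idle flag */

        int main(void)
        {
            int fd = open("/proc/kpageflags", O_RDONLY);
            uint64_t flags, idle = 0, total = 0;

            if (fd < 0) {
                perror("open /proc/kpageflags");
                return 1;
            }
            /* one 64-bit flags word per physical page frame, indexed by PFN */
            while (read(fd, &flags, sizeof(flags)) == sizeof(flags)) {
                total++;
                if (flags & (1ULL << KPF_IDLE))
                    idle++;
            }
            close(fd);
            printf("%llu of %llu pages are idle\n",
                   (unsigned long long)idle, (unsigned long long)total);
            return 0;
        }
      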
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f074a8f4
    • mm: introduce idle page tracking · 33c3fc71
      Committed by Vladimir Davydov
      Knowing the portion of memory that is not used by a certain application or
      memory cgroup (idle memory) can be useful for partitioning the system
      efficiently, e.g.  by setting memory cgroup limits appropriately.
      Currently, the only means to estimate the amount of idle memory provided
      by the kernel is /proc/PID/{clear_refs,smaps}: the user can clear the
      access bit for all pages mapped to a particular process by writing 1 to
      clear_refs, wait for some time, and then count smaps:Referenced.  However,
      this method has two serious shortcomings:
      
       - it does not count unmapped file pages
       - it affects the reclaimer logic
      
      To overcome these drawbacks, this patch introduces two new page flags,
      Idle and Young, and a new sysfs file, /sys/kernel/mm/page_idle/bitmap.
      A page's Idle flag can only be set from userspace, by setting the bit in
      /sys/kernel/mm/page_idle/bitmap at the offset corresponding to the page,
      and it is cleared whenever the page is accessed either through page tables
      (it is cleared in page_referenced() in this case) or using the read(2)
      system call (mark_page_accessed()). Thus by setting the Idle flag for
      pages of a particular workload, which can be found e.g.  by reading
      /proc/PID/pagemap, waiting for some time to let the workload access its
      working set, and then reading the bitmap file, one can estimate the
      number of pages that are not used by the workload.
      
      The Young page flag is used to avoid interference with the memory
      reclaimer.  A page's Young flag is set whenever the Access bit of a page
      table entry pointing to the page is cleared by writing to the bitmap file.
      If page_referenced() is called on a Young page, it will add 1 to its
      return value, therefore concealing the fact that the Access bit was
      cleared.
      
      Note, since there is no room for extra page flags on 32 bit, this feature
      uses extended page flags when compiled on 32 bit.
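      
      A minimal sketch of driving the bitmap from user space, assuming the
      layout described above (one bit per PFN, accessed in 8-byte units at
      8-byte-aligned offsets; needs root): mark the first 64 page frames idle,
      then read back which ones are still marked.
      
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            const char *path = "/sys/kernel/mm/page_idle/bitmap";
            uint64_t word = ~0ULL;   /* set Idle for PFNs 0..63 */
            int fd = open(path, O_RDWR);

            if (fd < 0) {
                perror("open page_idle bitmap");
                return 1;
            }
            if (pwrite(fd, &word, sizeof(word), 0) != sizeof(word)) {
                perror("pwrite");
                return 1;
            }
            /* ... let the workload run for a while, then: ... */
            if (pread(fd, &word, sizeof(word), 0) != sizeof(word)) {
                perror("pread");
                return 1;
            }
            printf("still-idle mask for PFNs 0..63: 0x%016llx\n",
                   (unsigned long long)word);
            close(fd);
            return 0;
        }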
      
      [akpm@linux-foundation.org: fix build]
      [akpm@linux-foundation.org: kpageidle requires an MMU]
      [akpm@linux-foundation.org: decouple from page-flags rework]
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      33c3fc71
    • proc: add kpagecgroup file · 80ae2fdc
      Committed by Vladimir Davydov
      /proc/kpagecgroup contains a 64-bit inode number of the memory cgroup each
      page is charged to, indexed by PFN.  Having this information is useful for
      estimating a cgroup working set size.
      
      The file is present if CONFIG_PROC_PAGE_MONITOR && CONFIG_MEMCG.
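      
      A minimal sketch of reading the file from user space (one 64-bit value
      per PFN; the PFN used here is arbitrary, and the program needs root):
      
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            uint64_t pfn = (argc > 1) ? strtoull(argv[1], NULL, 0) : 0x1000;
            uint64_t cgroup_ino;
            int fd = open("/proc/kpagecgroup", O_RDONLY);

            if (fd < 0) {
                perror("open /proc/kpagecgroup");
                return 1;
            }
            if (pread(fd, &cgroup_ino, sizeof(cgroup_ino),
                      pfn * sizeof(cgroup_ino)) != sizeof(cgroup_ino)) {
                perror("pread");
                return 1;
            }
            printf("PFN %llu is charged to memcg inode %llu\n",
                   (unsigned long long)pfn, (unsigned long long)cgroup_ino);
            close(fd);
            return 0;
        }
      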
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      80ae2fdc
  8. 10 September 2015 (2 commits)
  9. 09 September 2015 (10 commits)