1. 14 January 2012, 3 commits
  2. 13 January 2012, 9 commits
    • c/r: procfs: add start_data, end_data, start_brk members to /proc/$pid/stat v4 · b3f7f573
      Cyrill Gorcunov authored
      The mm->start_code/end_code, mm->start_data/end_data and mm->start_brk
      values are involved in the calculation of program text/data segment
      sizes (which may be seen in /proc/<pid>/statm) and in the final address
      of the brk() call.
      
      For restore we need to know all these values.  While
      mm->start_code/end_code are already present in /proc/$pid/stat, the
      remaining members are not, so this patch brings them in.
      
      The restore procedure of these members is addressed in another patch using
      prctl().
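      
      A quick user-space sketch of reading the new members (assuming, per
      this patch, that they are appended as fields 45-47 of the stat line;
      the parsing skips past the comm field via the last ')', since comm may
      contain spaces):
      
      	#include <stdio.h>
      	#include <stdlib.h>
      	#include <string.h>
      
      	int main(void)
      	{
      		char buf[4096];
      		FILE *f = fopen("/proc/self/stat", "r");
      
      		if (!f || !fgets(buf, sizeof(buf), f))
      			return 1;
      		fclose(f);
      
      		char *p = strrchr(buf, ')');	/* end of comm */
      		if (!p)
      			return 1;
      		p += 2;				/* skip ") " */
      
      		/* p now starts at field 3, so field N lands in v[N - 3] */
      		unsigned long v[64] = { 0 };
      		int i = 0;
      		for (char *t = strtok(p, " "); t && i < 64; t = strtok(NULL, " "))
      			v[i++] = strtoul(t, NULL, 10);
      
      		printf("start_data=%#lx end_data=%#lx start_brk=%#lx\n",
      		       v[45 - 3], v[46 - 3], v[47 - 3]);
      		return 0;
      	}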
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Vagin <avagin@openvz.org>
      Cc: Vasiliy Kulikov <segoon@openwall.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b3f7f573
    • dio: optimize cache misses in the submission path · 65dd2aa9
      Andi Kleen authored
      Some investigation of a transaction processing workload showed that a
      major consumer of cycles in __blockdev_direct_IO is the cache miss while
      accessing the block size.  This is because it has to walk the chain from
      block_dev to gendisk to queue.
      
      The block size is needed early on to check alignment and sizes.  It's only
      done if the check for the inode block size fails.  But the costly block
      device state is unconditionally fetched.
      
      - Reorganize the code to only fetch block dev state when actually
        needed.
      
      Then do a prefetch on the block dev early on in the direct IO path.  This
      is worth it, because there is substantial code run before we actually
      touch the block dev now.
      
      - I also added some unlikelies to make it clear to the compiler that
        the block device fetch code is not normally executed.
      
      This gave a small but measurable improvement on a large database
      benchmark (about 0.3%).
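      
      A hedged sketch of the resulting shape (simplified names; it leans on
      the bd_queue pointer cached by the companion patch below):
      
      	/* warm the queue cache line early in the submission path,
      	 * well before the alignment slow path may need it */
      	prefetch(&bdev->bd_queue);
      
      	/* annotate the slow path so the compiler keeps it out of
      	 * the hot instruction stream */
      	if (unlikely(offset & blocksize_mask)) {
      		if (bdev)
      			blkbits = blksize_bits(
      				bdev_logical_block_size(bdev));
      		blocksize_mask = (1 << blkbits) - 1;
      		if (offset & blocksize_mask)
      			return -EINVAL;
      	}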
      
      [akpm@linux-foundation.org: coding-style fixes]
      [sfr@canb.auug.org.au: using prefetch requires including prefetch.h]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Cc: Jeff Moyer <jmoyer@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      65dd2aa9
    • vfs: cache request_queue in struct block_device · 87192a2a
      Andi Kleen authored
      This makes it possible to get from the inode to the request_queue
      with one less cache miss.  Used in a follow-on optimization.
      
      The lifetime of the pointer is the same as the gendisk's.
      
      This assumes that the queue will always stay the same in the gendisk
      while it's visible to block_devices.  I think that's safe, correct?
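      
      A sketch of the cached field (field name as in this patch; it is
      assumed to be filled in once the gendisk is known at open time):
      
      	struct block_device {
      		...
      		struct gendisk *	bd_disk;
      		struct request_queue *	bd_queue;  /* cached bd_disk->queue */
      		...
      	};
      
      	/* in the open path, after the gendisk is attached: */
      	bdev->bd_queue = disk->queue;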
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Jeff Moyer <jmoyer@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      87192a2a
    • fs/direct-io.c: calculate fs_count correctly in get_more_blocks() · ae55e1aa
      Tao Ma authored
      In get_more_blocks(), we use dio_count to calculate fs_count and do
      some tricky things to increase fs_count if dio_count isn't aligned.
      But actually it still has some corner cases that can't be covered.
      See the following example:
      
      	dio_write foo -s 1024 -w 4096
      
      (direct write 4096 bytes at offset 1024).  The same goes whenever the
      offset isn't aligned to fs_blocksize.
      
      In this case, the old calculation counts fs_count as 1, but actually
      we will write into 2 different blocks (if fs_blocksize=4096).  The old
      code just works, since it will call get_block twice (and may have to
      allocate and create extents twice for filesystems like ext4).  So we'd
      better call get_block just once with the proper fs_count.
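      
      A sketch of the corrected calculation (names as in get_more_blocks()
      at the time; for the example above, offset 1024 is in block 0 and the
      last byte, at offset 5119, is in block 1, so fs_count = 2):
      
      	fs_startblk = sdio->block_in_file >> sdio->blkfactor;
      	fs_endblk = (sdio->final_block_in_request - 1) >>
      			sdio->blkfactor;
      	fs_count = fs_endblk - fs_startblk + 1;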
      Signed-off-by: Tao Ma <boyu.mt@taobao.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ae55e1aa
    • mm: compaction: introduce sync-light migration for use by compaction · a6bc32b8
      Mel Gorman authored
      This patch adds a lightweight sync migration mode, MIGRATE_SYNC_LIGHT,
      that avoids writing back pages to backing storage.  Async compaction
      maps to MIGRATE_ASYNC while sync compaction maps to MIGRATE_SYNC_LIGHT.
      For other migrate_pages users, such as memory hotplug, MIGRATE_SYNC is
      used.
      
      This avoids sync compaction stalling for an excessive length of time,
      particularly when copying files to a USB stick where there might be a
      large number of dirty pages backed by a filesystem that does not support
      ->writepages.
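      
      The resulting mode ordering, sketched (each mode permits strictly more
      blocking than the previous one; comments paraphrased):
      
      	enum migrate_mode {
      		MIGRATE_ASYNC,		/* never block */
      		MIGRATE_SYNC_LIGHT,	/* may block, but not on page
      					 * writeback */
      		MIGRATE_SYNC,		/* may block, writeback included */
      	};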
      
      [aarcange@redhat.com: This patch is heavily based on Andrea's work]
      [akpm@linux-foundation.org: fix fs/nfs/write.c build]
      [akpm@linux-foundation.org: fix fs/btrfs/disk-io.c build]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Andy Isaacson <adi@hexapodia.org>
      Cc: Nai Xia <nai.xia@gmail.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a6bc32b8
    • mm: compaction: determine if dirty pages can be migrated without blocking within ->migratepage · b969c4ab
      Mel Gorman authored
      Asynchronous compaction is used when allocating transparent hugepages
      to avoid blocking for long periods of time.  Due to reports of
      stalling, there was a debate about disabling synchronous compaction,
      but this severely impacted allocation success rates.  Part of the
      reason was that many dirty pages are skipped in asynchronous
      compaction by the following check:
      
      	if (PageDirty(page) && !sync &&
      		mapping->a_ops->migratepage != migrate_page)
      			rc = -EBUSY;
      
      This skips over all mapping aops using buffer_migrate_page() even though
      it is possible to migrate some of these pages without blocking.  This
      patch updates the ->migratepage callback with a "sync" parameter.  It is
      the responsibility of the callback to fail gracefully if migration would
      block.
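      
      A sketch of the widened hook (this patch threads the sync flag
      through; the patch listed above then turns it into enum
      migrate_mode, giving the final shape):
      
      	/* in struct address_space_operations: the mode tells the
      	 * callback how much blocking is tolerated, and it must return
      	 * -EBUSY rather than block when that budget is exceeded */
      	int (*migratepage)(struct address_space *mapping,
      			   struct page *newpage, struct page *page,
      			   enum migrate_mode mode);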
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Andy Isaacson <adi@hexapodia.org>
      Cc: Nai Xia <nai.xia@gmail.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b969c4ab
    • epoll: limit paths · 28d82dc1
      Jason Baron authored
      The current epoll code can be tickled to run basically indefinitely
      both in the loop detection path check (on ep_insert()) and in the
      wakeup paths.
      The programs that tickle this behavior set up deeply linked networks of
      epoll file descriptors that cause the epoll algorithms to traverse them
      indefinitely.  A couple of these sample programs have been previously
      posted in this thread: https://lkml.org/lkml/2011/2/25/297.
      
      To fix the loop detection path check algorithms, I simply keep track
      of the epoll nodes that have already been visited.  Thus, the loop
      detection becomes proportional to the number of epoll file
      descriptors and links.  This dramatically decreases the run-time of
      the loop check algorithm.  In one diabolical case I tried, it reduced
      the run-time from 15 minutes (all in kernel time) to 0.3 seconds.
      
      Fixing the wakeup paths could be done at wakeup time in a similar
      manner by keeping track of nodes that have already been visited, but
      it is harder there, since there can be multiple wakeups on different
      CPUs.  Thus, I've opted to limit the number of possible wakeup paths
      when the paths are created.
      
      This is accomplished by noting that the file descriptor endpoints
      found during the loop detection pass (from the newly added link) are
      actually the sources for wakeup events.  I keep a list of these file
      descriptors and limit the number and length of the paths that emanate
      from these 'source file descriptors'.  In the current implementation
      I allow 1000 paths of length 1, 500 of length 2, 100 of length 3, 50
      of length 4 and 10 of length 5.  Note that it is sufficient to check
      the 'source file descriptors' reachable from the newly added link,
      since no other 'source file descriptors' will have newly added links.
      This allows us to check only the wakeup paths that may have gotten
      too long, and not re-check all possible wakeup paths on the system.
      
      In terms of the path limit selection, it is first worth noting that
      the most common case for epoll is probably the model where you have
      one epoll file descriptor that is monitoring n 'source file
      descriptors'.  In this case, each 'source file descriptor' has one
      path, of length 1.  Thus, I believe that the limits I'm proposing are
      quite reasonable and in fact may be too generous.  I'm hoping that
      the proposed limits will not cause any workloads that currently work
      to fail.
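      
      A sketch of how such limits can be encoded (the numbers mirror the
      paragraph above; path_count_inc() is an assumed helper bumping a
      per-depth counter during the reverse-path walk):
      
      	#define PATH_ARR_SIZE 5
      	/* paths of length 1..5, indexed by depth - 1 */
      	static const int path_limits[PATH_ARR_SIZE] = {
      		1000, 500, 100, 50, 10
      	};
      	static int path_count[PATH_ARR_SIZE];
      
      	static int path_count_inc(int nests)
      	{
      		if (++path_count[nests] > path_limits[nests])
      			return -1;	/* too many paths of this length */
      		return 0;
      	}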
      
      In terms of locking, I have extended the use of the 'epmutex' to all
      epoll_ctl add and remove operations.  Currently it is only used in a
      subset of the add paths.  I need to hold the epmutex so that we can
      correctly traverse a coherent graph to check the number of paths.  I
      believe that this additional locking is probably ok, since it is in
      the setup/teardown paths and doesn't affect the running paths, but it
      certainly is going to add some extra overhead.  Also worth noting is
      that the epmutex was recently added to the epoll_ctl add operations
      in the initial path loop detection code, using the argument that it
      was not on a critical path.
      
      Another thing to note here is the allowed length of epoll chains.
      Currently, eventpoll.c defines:
      
      /* Maximum number of nesting allowed inside epoll sets */
      #define EP_MAX_NESTS 4
      
      This basically means that I am limited to a graph depth of 5
      (EP_MAX_NESTS + 1).  However, this limit is currently only enforced
      during the loop check detection code, and only when the epoll file
      descriptors are added in a certain order.  Thus, this limit is
      currently easily bypassed.  The newly added check for wakeup paths
      strictly limits the wakeup paths to a length of 5, regardless of the
      order in which eps are linked together.  Thus, a side effect of the
      new code is a more consistent enforcement of the graph depth.
      
      Thus far, I've tested this using the sample programs previously
      mentioned, which now either return quickly or return -EINVAL.  I've
      also tested using the piptest.c epoll tester, which showed no
      difference in performance.  I've also created a number of different
      epoll networks and tested that they behave as expected.
      
      I believe this solves the original diabolical test cases, while still
      preserving the sane epoll nesting.
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Cc: Nelson Elhage <nelhage@ksplice.com>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      28d82dc1
    • pipe: fail cleanly when root tries F_SETPIPE_SZ with big size · 2ccd4f4d
      Sasha Levin authored
      When a user with the CAP_SYS_RESOURCE cap tries to F_SETPIPE_SZ a
      pipe with a size bigger than kmalloc() can allocate, it spits out an
      ugly warning:
      
        ------------[ cut here ]------------
        WARNING: at mm/page_alloc.c:2095 __alloc_pages_nodemask+0x5d3/0x7a0()
        Pid: 733, comm: a.out Not tainted 3.2.0-rc1+ #4
        Call Trace:
           warn_slowpath_common+0x75/0xb0
           warn_slowpath_null+0x15/0x20
           __alloc_pages_nodemask+0x5d3/0x7a0
           __get_free_pages+0x12/0x50
           __kmalloc+0x12b/0x150
           pipe_set_size+0x75/0x120
           pipe_fcntl+0xf8/0x140
           do_fcntl+0x2d4/0x410
           sys_fcntl+0x66/0xa0
           system_call_fastpath+0x16/0x1b
        ---[ end trace 432f702e6db7b5ee ]---
      
      Instead, use kcalloc(), which handles the overflow case and fails
      quietly.
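      
      A sketch of the change in pipe_set_size() (the bracketed note below
      refers to the sizeof(*bufs) spelling):
      
      	/* before: the multiplication can overflow, and huge orders
      	 * trip the page allocator's WARN */
      	bufs = kmalloc(nr_pages * sizeof(struct pipe_buffer), GFP_KERNEL);
      
      	/* after: kcalloc checks the multiplication for overflow and
      	 * __GFP_NOWARN keeps oversized requests quiet, so the caller
      	 * just sees a clean NULL and returns -ENOMEM */
      	bufs = kcalloc(nr_pages, sizeof(*bufs), GFP_KERNEL | __GFP_NOWARN);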
      
      [akpm@linux-foundation.org: switch to sizeof(*bufs) for 80-column niceness]
      Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2ccd4f4d
    • proc: fix null pointer deref in proc_pid_permission() · a2ef990a
      Xiaotian Feng authored
      get_proc_task() can fail to find the task and return NULL;
      put_task_struct() will then bomb the kernel with the following oops:
      
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
        IP: [<ffffffff81217d34>] proc_pid_permission+0x64/0xe0
        PGD 112075067 PUD 112814067 PMD 0
        Oops: 0002 [#1] PREEMPT SMP
      
      This is a regression introduced by commit 0499680a ("procfs: add
      hidepid= and gid= mount options").  The kernel should return -ESRCH
      if get_proc_task() fails.
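      
      A sketch of the fix in proc_pid_permission() (helper names as used by
      the offending commit):
      
      	task = get_proc_task(inode);
      	if (!task)
      		return -ESRCH;	/* task went away: nothing to deref */
      	has_perms = has_pid_permissions(pid, task, hide_pid_min);
      	put_task_struct(task);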
      Signed-off-by: Xiaotian Feng <dannyfeng@tencent.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Vasiliy Kulikov <segoon@openwall.com>
      Cc: Stephen Wilson <wilsons@start.ca>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a2ef990a
  3. 11 January 2012, 21 commits
    • autofs4: deal with autofs4_write/autofs4_write races · d668dc56
      Al Viro authored
      Just serialize the actual writing of packets into the pipe under a
      new mutex, independent of everything else in the locking hierarchy.
      As soon as something has started feeding a packet into the pipe to
      the daemon, we *want* everything else about to try the same to wait
      until we are done.
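      
      A sketch, assuming the mutex lives in autofs_sb_info:
      
      	/* a dedicated lock, outside the rest of the hierarchy, so
      	 * packets written into the daemon's pipe never interleave */
      	mutex_lock(&sbi->pipe_mutex);
      	ret = autofs4_write(sbi->pipe, &pkt, pktsz);
      	mutex_unlock(&sbi->pipe_mutex);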
      Acked-by: Ian Kent <raven@themaw.net>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      d668dc56
    • autofs4: catatonic_mode vs. notify_daemon race · 87533332
      Al Viro authored
      We need to hold ->wq_mutex while we are forming the packet to send,
      lest we have autofs4_catatonic_mode() setting wq->name.name to NULL
      just as autofs4_notify_daemon() decides to memcpy() from it...
      
      We do have a check for catatonic mode immediately after that (under
      ->wq_mutex, as it ought to be) and the packet won't actually be sent,
      but it'll be too late for us if we oops on that memcpy() from NULL...
      
      The fix is obvious: just extend the area covered by ->wq_mutex over
      that switch, and check whether it's catatonic *before* doing anything
      else.
      Acked-by: Ian Kent <raven@themaw.net>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      87533332
    • autofs4: autofs4_wait() vs. autofs4_catatonic_mode() race · 4041bcdc
      Al Viro authored
      We need to recheck ->catatonic after autofs4_wait() has taken
      ->wq_mutex for good, or we might end up with the wq inserted into the
      queue after autofs4_catatonic_mode() has done its thing.  It will
      stick there forever, since there won't be anything to clear its
      ->name.name.
      
      A bit of a complication: validate_request() drops and regains
      ->wq_mutex.  It actually ends up being the most convenient place to
      stick the check in...
      Acked-by: Ian Kent <raven@themaw.net>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      4041bcdc
    • procfs: add hidepid= and gid= mount options · 0499680a
      Vasiliy Kulikov authored
      Add support for mount options to restrict access to /proc/PID/
      directories.  The default backward-compatible "relaxed" behaviour is left
      untouched.
      
      The first mount option is called "hidepid" and its value defines how much
      info about processes we want to be available for non-owners:
      
      hidepid=0 (default) means the old behaviour: anybody may read all
      world-readable /proc/PID/* files.
      
      hidepid=1 means users may not access any /proc/<pid>/ directories but
      their own.  Sensitive files like cmdline, sched*, and status are now
      protected against other users.  As permission checking is done in
      proc_pid_permission() and file permissions are left untouched,
      programs expecting specific file modes are not confused.
      
      hidepid=2 means hidepid=1 plus all /proc/PID/ directories will be
      invisible to other users.  It doesn't mean that it hides whether a
      process exists (that can be learned by other means, e.g. by kill -0
      $PID), but it hides the process's euid and egid.  It complicates an
      intruder's task of gathering info about running processes: whether
      some daemon runs with elevated privileges, whether another user runs
      some sensitive program, whether other users run any program at all,
      etc.
      
      gid=XXX defines a group that will be able to gather all processes'
      info (as in hidepid=0 mode).  This group should be used instead of
      putting a non-root user in the sudoers file or the like.  However,
      untrusted users (like daemons, etc.) which are not supposed to
      monitor the tasks in the whole system should not be added to the
      group.
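      
      Usage sketch (gid 1001 stands in for an assumed monitoring group):
      
      	# mount -o remount,hidepid=2,gid=1001 /proc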
      
      hidepid=1 or higher is designed to restrict access to procfs files
      which might reveal some sensitive private information, like precise
      keystroke timings:
      
      http://www.openwall.com/lists/oss-security/2011/11/05/3
      
      hidepid=1/2 doesn't break monitoring userspace tools.  ps, top, pgrep, and
      conky gracefully handle EPERM/ENOENT and behave as if the current user is
      the only user running processes.  pstree shows the process subtree which
      contains "pstree" process.
      
      Note: the patch doesn't deal with the setuid/setgid issues of keeping
      preopened descriptors of procfs files (like
      https://lkml.org/lkml/2011/2/7/368).  We rely on the fact that leaked
      information like the scheduling counters of setuid apps doesn't
      threaten anybody's privacy; only the user who started the setuid
      program may read the counters.
      Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Randy Dunlap <rdunlap@xenotime.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Greg KH <greg@kroah.com>
      Cc: Theodore Tso <tytso@MIT.EDU>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: James Morris <jmorris@namei.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0499680a
    • procfs: parse mount options · 97412950
      Vasiliy Kulikov authored
      Add support for procfs mount options.  Actual mount options are coming in
      the next patches.
      Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Randy Dunlap <rdunlap@xenotime.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Greg KH <greg@kroah.com>
      Cc: Theodore Tso <tytso@MIT.EDU>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: James Morris <jmorris@namei.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      97412950
    • procfs: introduce the /proc/<pid>/map_files/ directory · 640708a2
      Pavel Emelyanov authored
      This one behaves similarly to the /proc/<pid>/fd/ one: it contains
      one symlink for each mapping backed by a file; the name of a symlink
      is "vma->vm_start-vma->vm_end" and its target is the file.  Opening a
      symlink results in a file that points to exactly the same inode as
      the vma's one.
      
      For example, the ls -l output of some arbitrary /proc/<pid>/map_files/:
      
       | lr-x------ 1 root root 64 Aug 26 06:40 7f8f80403000-7f8f80404000 -> /lib64/libc-2.5.so
       | lr-x------ 1 root root 64 Aug 26 06:40 7f8f8061e000-7f8f80620000 -> /lib64/libselinux.so.1
       | lr-x------ 1 root root 64 Aug 26 06:40 7f8f80826000-7f8f80827000 -> /lib64/libacl.so.1.1.0
       | lr-x------ 1 root root 64 Aug 26 06:40 7f8f80a2f000-7f8f80a30000 -> /lib64/librt-2.5.so
       | lr-x------ 1 root root 64 Aug 26 06:40 7f8f80a30000-7f8f80a4c000 -> /lib64/ld-2.5.so
      
      This helps the checkpointing process in three ways:
      
      1. When dumping a task's mappings, we know the exact file that is
         mapped by a particular region.  We do this by opening the
         /proc/$pid/map_files/$address symlink the way we do with file
         descriptors.
      
      2. It also helps in determining which anonymous shared mappings are
         shared with each other, by comparing their inodes.
      
      3. When restoring a set of processes where two of them share a
         mapping, we map the memory in the first one, then open its
         /proc/$pid/map_files/$address file and map it in the second task.
      
      Using /proc/$pid/maps for this is quite inconvenient, since it
      requires repeated re-reading and re-parsing of that text file, which
      slows down the restore procedure significantly.  Also, as pointed out
      in (3), it is way easier to use the top-level shared mapping in
      children via /proc/$pid/map_files/$address when needed.
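      
      A quick shell sketch (reading map_files typically requires privilege;
      the awk match is illustrative, and the maps address range matches the
      symlink name exactly):
      
      	# addr=$(awk '/libc/ { print $1; exit }' /proc/$pid/maps)
      	# readlink /proc/$pid/map_files/$addr
      	/lib64/libc-2.5.so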
      
      [akpm@linux-foundation.org: coding-style fixes]
      [gorcunov@openvz.org: make map_files depend on CHECKPOINT_RESTORE]
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Reviewed-by: Vasiliy Kulikov <segoon@openwall.com>
      Reviewed-by: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Cc: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      640708a2
    • procfs: make proc_get_link to use dentry instead of inode · 7773fbc5
      Cyrill Gorcunov authored
      Prepare the ground for the next "map_files" patch, which needs the
      name of a link file to analyse.
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vasiliy Kulikov <segoon@openwall.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7773fbc5
    • reiserfs: don't lock root inode searching · 9b467e6e
      Frederic Weisbecker authored
      Nothing requires that we lock the filesystem until the root inode is
      provided.
      
      Also, iget5_locked() triggers a warning because we are holding the
      filesystem lock while allocating the inode, which results in a
      lockdep suspicion that we have a lock inversion against the reclaim
      path:
      
      [ 1986.896979] =================================
      [ 1986.896990] [ INFO: inconsistent lock state ]
      [ 1986.896997] 3.1.1-main #8
      [ 1986.897001] ---------------------------------
      [ 1986.897007] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
      [ 1986.897016] kswapd0/16 [HC0[0]:SC0[0]:HE1:SE1] takes:
      [ 1986.897023]  (&REISERFS_SB(s)->lock){+.+.?.}, at: [<c01f8bd4>] reiserfs_write_lock+0x20/0x2a
      [ 1986.897044] {RECLAIM_FS-ON-W} state was registered at:
      [ 1986.897050]   [<c014a5b9>] mark_held_locks+0xae/0xd0
      [ 1986.897060]   [<c014aab3>] lockdep_trace_alloc+0x7d/0x91
      [ 1986.897068]   [<c0190ee0>] kmem_cache_alloc+0x1a/0x93
      [ 1986.897078]   [<c01e7728>] reiserfs_alloc_inode+0x13/0x3d
      [ 1986.897088]   [<c01a5b06>] alloc_inode+0x14/0x5f
      [ 1986.897097]   [<c01a5cb9>] iget5_locked+0x62/0x13a
      [ 1986.897106]   [<c01e99e0>] reiserfs_fill_super+0x410/0x8b9
      [ 1986.897114]   [<c01953da>] mount_bdev+0x10b/0x159
      [ 1986.897123]   [<c01e764d>] get_super_block+0x10/0x12
      [ 1986.897131]   [<c0195b38>] mount_fs+0x59/0x12d
      [ 1986.897138]   [<c01a80d1>] vfs_kern_mount+0x45/0x7a
      [ 1986.897147]   [<c01a83e3>] do_kern_mount+0x2f/0xb0
      [ 1986.897155]   [<c01a987a>] do_mount+0x5c2/0x612
      [ 1986.897163]   [<c01a9a72>] sys_mount+0x61/0x8f
      [ 1986.897170]   [<c044060c>] sysenter_do_call+0x12/0x32
      [ 1986.897181] irq event stamp: 7509691
      [ 1986.897186] hardirqs last  enabled at (7509691): [<c0190f34>] kmem_cache_alloc+0x6e/0x93
      [ 1986.897197] hardirqs last disabled at (7509690): [<c0190eea>] kmem_cache_alloc+0x24/0x93
      [ 1986.897209] softirqs last  enabled at (7508896): [<c01294bd>] __do_softirq+0xee/0xfd
      [ 1986.897222] softirqs last disabled at (7508859): [<c01030ed>] do_softirq+0x50/0x9d
      [ 1986.897234]
      [ 1986.897235] other info that might help us debug this:
      [ 1986.897242]  Possible unsafe locking scenario:
      [ 1986.897244]
      [ 1986.897250]        CPU0
      [ 1986.897254]        ----
      [ 1986.897257]   lock(&REISERFS_SB(s)->lock);
      [ 1986.897265] <Interrupt>
      [ 1986.897269]     lock(&REISERFS_SB(s)->lock);
      [ 1986.897276]
      [ 1986.897277]  *** DEADLOCK ***
      [ 1986.897278]
      [ 1986.897286] no locks held by kswapd0/16.
      [ 1986.897291]
      [ 1986.897292] stack backtrace:
      [ 1986.897299] Pid: 16, comm: kswapd0 Not tainted 3.1.1-main #8
      [ 1986.897306] Call Trace:
      [ 1986.897314]  [<c0439e76>] ? printk+0xf/0x11
      [ 1986.897324]  [<c01482d1>] print_usage_bug+0x20e/0x21a
      [ 1986.897332]  [<c01479b8>] ? print_irq_inversion_bug+0x172/0x172
      [ 1986.897341]  [<c014855c>] mark_lock+0x27f/0x483
      [ 1986.897349]  [<c0148d88>] __lock_acquire+0x628/0x1472
      [ 1986.897358]  [<c0149fae>] lock_acquire+0x47/0x5e
      [ 1986.897366]  [<c01f8bd4>] ? reiserfs_write_lock+0x20/0x2a
      [ 1986.897384]  [<c01f8bd4>] ? reiserfs_write_lock+0x20/0x2a
      [ 1986.897397]  [<c043b5ef>] mutex_lock_nested+0x35/0x26f
      [ 1986.897409]  [<c01f8bd4>] ? reiserfs_write_lock+0x20/0x2a
      [ 1986.897421]  [<c01f8bd4>] reiserfs_write_lock+0x20/0x2a
      [ 1986.897433]  [<c01e2edd>] map_block_for_writepage+0xc9/0x590
      [ 1986.897448]  [<c01b1706>] ? create_empty_buffers+0x33/0x8f
      [ 1986.897461]  [<c0121124>] ? get_parent_ip+0xb/0x31
      [ 1986.897472]  [<c043ef7f>] ? sub_preempt_count+0x81/0x8e
      [ 1986.897485]  [<c043cae0>] ? _raw_spin_unlock+0x27/0x3d
      [ 1986.897496]  [<c0121124>] ? get_parent_ip+0xb/0x31
      [ 1986.897508]  [<c01e355d>] reiserfs_writepage+0x1b9/0x3e7
      [ 1986.897521]  [<c0173b40>] ? clear_page_dirty_for_io+0xcb/0xde
      [ 1986.897533]  [<c014a6e3>] ? trace_hardirqs_on_caller+0x108/0x138
      [ 1986.897546]  [<c014a71e>] ? trace_hardirqs_on+0xb/0xd
      [ 1986.897559]  [<c0177b38>] shrink_page_list+0x34f/0x5e2
      [ 1986.897572]  [<c01780a7>] shrink_inactive_list+0x172/0x22c
      [ 1986.897585]  [<c0178464>] shrink_zone+0x303/0x3b1
      [ 1986.897597]  [<c043cae0>] ? _raw_spin_unlock+0x27/0x3d
      [ 1986.897611]  [<c01788c9>] kswapd+0x3b7/0x5f2
      
      The deadlock shouldn't happen, since we are doing that allocation in
      the mount path; the filesystem is not yet available for any reclaim.
      Still, the warning is annoying.
      
      To solve this, acquire the lock later, only where we need it: right
      before calling reiserfs_read_locked_inode(), which wants the lock to
      walk the tree.
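      
      A sketch of the reordering (assuming, per the description, that only
      the tree walk needs the lock):
      
      	root_inode = iget5_locked(s, ...);	/* no FS lock held, so
      						 * the inode allocation
      						 * may dip into reclaim
      						 * safely */
      	if (root_inode->i_state & I_NEW) {
      		reiserfs_write_lock(s);	/* only the tree walk needs it */
      		reiserfs_read_locked_inode(root_inode, &args);
      		reiserfs_write_unlock(s);
      		unlock_new_inode(root_inode);
      	}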
      Reported-by: Knut Petersen <Knut_Petersen@t-online.de>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9b467e6e
    • reiserfs: don't lock journal_init() · 37c69b98
      Frederic Weisbecker authored
      journal_init() doesn't need the lock, since no operation on the
      filesystem is involved there.  journal_read() and get_list_bitmap()
      have yet to be reviewed carefully, though, before the lock can be
      removed there.  Just keep it around those two calls for safety.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      37c69b98
    • reiserfs: delay reiserfs lock until journal initialization · f32485be
      Frederic Weisbecker authored
      In the mount path, transactions that are made before journal
      initialization don't involve the filesystem.  We can delay the reiserfs
      lock until we play with the journal.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f32485be
    • reiserfs: delete comments referring to the BKL · b18c1c6e
      Davidlohr Bueso authored
      Signed-off-by: Davidlohr Bueso <dave@gnu.org>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b18c1c6e
    • fs: binfmt_elf: create Kconfig variable for PIE randomization · e39f5602
      David Daney authored
      Randomization of the PIE load address is hard-coded in binfmt_elf.c
      for x86 and ARM.  Create a new Kconfig variable
      (CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE) for this and use it instead.
      Thus architecture-specific policy is pushed out of the generic
      binfmt_elf.c and into the architecture Kconfig files.
      
      x86 and ARM Kconfigs are modified to select the new variable, so
      there is no change in behavior.  A follow-on patch will select it for
      MIPS too.
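      
      A sketch of the mechanism (the symbol name is from this patch; file
      locations are assumed):
      
      	# generic Kconfig: a silent symbol the architectures opt into
      	config ARCH_BINFMT_ELF_RANDOMIZE_PIE
      		bool
      
      	# arch/x86/Kconfig and arch/arm/Kconfig then simply add:
      	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
      
      and binfmt_elf.c tests CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE instead
      of an explicit #if defined(CONFIG_X86) || defined(CONFIG_ARM) list.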
      Signed-off-by: David Daney <david.daney@cavium.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e39f5602
    • tracepoint: add tracepoints for debugging oom_score_adj · 43d2b113
      KAMEZAWA Hiroyuki authored
      oom_score_adj is used for guarding processes from the OOM killer.
      One problem is that it's inherited at fork().  When a daemon sets
      oom_score_adj and makes children, it's hard to know where the value
      was set.
      
      This patch adds three tracepoints useful for debugging:
        - creating a new task
        - renaming a task (exec)
        - setting oom_score_adj
      
      To debug, users need to enable the tracepoints.  Filtering may be
      useful, as in:
      
      # EVENT=/sys/kernel/debug/tracing/events/task/
      # echo "oom_score_adj != 0" > $EVENT/task_newtask/filter
      # echo "oom_score_adj != 0" > $EVENT/task_rename/filter
      # echo 1 > $EVENT/enable
      # EVENT=/sys/kernel/debug/tracing/events/oom/
      # echo 1 > $EVENT/enable
      
      The output will look like this:
      # grep oom /sys/kernel/debug/tracing/trace
      bash-7699  [007] d..3  5140.744510: oom_score_adj_update: pid=7699 comm=bash oom_score_adj=-1000
      bash-7699  [007] ...1  5151.818022: task_newtask: pid=7729 comm=bash clone_flags=1200011 oom_score_adj=-1000
      ls-7729  [003] ...2  5151.818504: task_rename: pid=7729 oldcomm=bash newcomm=ls oom_score_adj=-1000
      bash-7699  [002] ...1  5175.701468: task_newtask: pid=7730 comm=bash clone_flags=1200011 oom_score_adj=-1000
      grep-7730  [007] ...2  5175.701993: task_rename: pid=7730 oldcomm=bash newcomm=grep oom_score_adj=-1000
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      43d2b113
    • btrfs: pass __GFP_WRITE for buffered write page allocations · e3a41a5b
      Johannes Weiner authored
      Tell the page allocator that pages allocated for a buffered write are
      expected to become dirty soon.
      Signed-off-by: Johannes Weiner <jweiner@redhat.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Shaohua Li <shaohua.li@intel.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e3a41a5b
    • mm: account reaped page cache on inode cache pruning · 5f8aefd4
      Konstantin Khlebnikov authored
      Inode cache pruning indirectly reclaims page cache by invalidating
      mapping pages.  Let's account them in reclaim_state so the memory
      reclaimer notices this progress.
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5f8aefd4
    • hfsplus: creation of hidden dir on mount can fail · b3f2a924
      Al Viro authored
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      b3f2a924
    • block_dev: Suppress bdev_cache_init() kmemleak warning · ace8577a
      Sergey Senozhatsky authored
      Kmemleak reports the following warning in bdev_cache_init():
      [    0.003738] kmemleak: Object 0xffff880153035200 (size 256):
      [    0.003823] kmemleak:   comm "swapper/0", pid 0, jiffies 4294667299
      [    0.003909] kmemleak:   min_count = 1
      [    0.003988] kmemleak:   count = 0
      [    0.004066] kmemleak:   flags = 0x1
      [    0.004144] kmemleak:   checksum = 0
      [    0.004224] kmemleak:   backtrace:
      [    0.004303]      [<ffffffff814755ac>] kmemleak_alloc+0x21/0x3e
      [    0.004446]      [<ffffffff811100ba>] kmem_cache_alloc+0xca/0x1dc
      [    0.004592]      [<ffffffff811371b1>] alloc_vfsmnt+0x1f/0x198
      [    0.004736]      [<ffffffff811375c5>] vfs_kern_mount+0x36/0xd2
      [    0.004879]      [<ffffffff8113929a>] kern_mount_data+0x18/0x32
      [    0.005025]      [<ffffffff81ab9075>] bdev_cache_init+0x51/0x81
      [    0.005169]      [<ffffffff81ab8abf>] vfs_caches_init+0x101/0x10d
      [    0.005313]      [<ffffffff81a9bae3>] start_kernel+0x344/0x383
      [    0.005456]      [<ffffffff81a9b2a7>] x86_64_start_reservations+0xae/0xb2
      [    0.005602]      [<ffffffff81a9b3ad>] x86_64_start_kernel+0x102/0x111
      [    0.005747]      [<ffffffffffffffff>] 0xffffffffffffffff
      [    0.008653] kmemleak: Trying to color unknown object at 0xffff880153035220 as Grey
      [    0.008754] Pid: 0, comm: swapper/0 Not tainted 3.3.0-rc0-dbg-04200-g8180888-dirty #888
      [    0.008856] Call Trace:
      [    0.008934]  [<ffffffff81118704>] ? find_and_get_object+0x44/0x118
      [    0.009023]  [<ffffffff81118fe6>] paint_ptr+0x57/0x8f
      [    0.009109]  [<ffffffff81475935>] kmemleak_not_leak+0x23/0x42
      [    0.009195]  [<ffffffff81ab9096>] bdev_cache_init+0x72/0x81
      [    0.009282]  [<ffffffff81ab8abf>] vfs_caches_init+0x101/0x10d
      [    0.009368]  [<ffffffff81a9bae3>] start_kernel+0x344/0x383
      [    0.009466]  [<ffffffff81a9b2a7>] x86_64_start_reservations+0xae/0xb2
      [    0.009555]  [<ffffffff81a9b140>] ? early_idt_handlers+0x140/0x140
      [    0.009643]  [<ffffffff81a9b3ad>] x86_64_start_kernel+0x102/0x111
      
      due to an attempt to mark the pointer to `struct vfsmount' as a gray
      object; the pointer is embedded into the `struct mount' returned from
      alloc_vfsmnt().
      
      Make `bd_mnt' static, avoiding the need to tell kmemleak to mark it
      gray, as suggested by Al Viro.
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      ace8577a
    • fix shrink_dcache_parent() livelock · eaf5f907
      Miklos Szeredi authored
      Two (or more) concurrent calls of shrink_dcache_parent() on the same dentry may
      cause shrink_dcache_parent() to loop forever.
      
      Here's what appears to happen:
      
      1 - CPU0: select_parent(P) finds C and puts it on dispose list, returns 1
      
      2 - CPU1: select_parent(P) locks P->d_lock
      
      3 - CPU0: shrink_dentry_list() locks C->d_lock
         dentry_kill(C) tries to lock P->d_lock but fails, unlocks C->d_lock
      
      4 - CPU1: select_parent(P) locks C->d_lock, moves C from the dispose
          list being processed on CPU0 to the new dispose list, returns 1
      
      5 - CPU0: shrink_dentry_list() finds dispose list empty, returns
      
      6 - Goto 2 with CPU0 and CPU1 switched
      
      Basically select_parent() steals the dentry from shrink_dentry_list() and thinks
      it found a new one, causing shrink_dentry_list() to think it's making progress
      and loop over and over.
      
      One way to trigger this is to make udev call stat() on a sysfs file
      while it is going away.
      
      Having a file in /lib/udev/rules.d/ with only this one rule seems to
      do the trick:
      
      ATTR{vendor}=="0x8086", ATTR{device}=="0x10ca", ENV{PCI_SLOT_NAME}="%k", ENV{MATCHADDR}="$attr{address}", RUN+="/bin/true"
      
      Then execute the following loop:
      
      while true; do
              echo -bond0 > /sys/class/net/bonding_masters
              echo +bond0 > /sys/class/net/bonding_masters
              echo -bond1 > /sys/class/net/bonding_masters
              echo +bond1 > /sys/class/net/bonding_masters
      done
      
      One fix would be to check all callers and prevent concurrent calls to
      shrink_dcache_parent().  But I think a better solution is to stop the
      stealing behavior.
      
      This patch adds a new dentry flag that is set when the dentry is added to the
      dispose list.  The flag is cleared in dentry_lru_del() in case the dentry gets a
      new reference just before being pruned.
      
      If the dentry has this flag, select_parent() will skip it and let
      shrink_dentry_list() retry pruning it.  With select_parent() skipping those
      dentries there will not be the appearance of progress (new dentries found) when
      there is none, hence shrink_dcache_parent() will not loop forever.
      
      The flag is also set in prune_dcache_sb() for consistency, as
      suggested by Linus.
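      
      A sketch of the skip (flag name as introduced by this patch):
      
      	/* in select_parent(): a dentry already on someone else's
      	 * dispose list must not be stolen; shrink_dentry_list()
      	 * will retry pruning it */
      	if (dentry->d_flags & DCACHE_SHRINK_LIST)
      		continue;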
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      CC: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      eaf5f907
    • ext4: fix undefined behavior in ext4_fill_flex_info() · d50f2ab6
      Xi Wang authored
      Commit 503358ae ("ext4: avoid divide by
      zero when trying to mount a corrupted file system") fixes CVE-2009-4307
      by performing a sanity check on s_log_groups_per_flex, since it can be
      set to a bogus value by an attacker.
      
      	sbi->s_log_groups_per_flex = sbi->s_es->s_log_groups_per_flex;
      	groups_per_flex = 1 << sbi->s_log_groups_per_flex;
      
      	if (groups_per_flex < 2) { ... }
      
      This patch fixes two potential issues in the previous commit.
      
      1) The sanity check might only work on architectures like PowerPC.
      On x86, 5 bits are used for the shifting amount.  That means, given a
      large s_log_groups_per_flex value like 36, groups_per_flex = 1 << 36
      is essentially 1 << 4 = 16, rather than 0.  This will bypass the check,
      leaving s_log_groups_per_flex and groups_per_flex inconsistent.
      
      2) The sanity check relies on undefined behavior, i.e., an oversized
      shift.  A standard-conforming C compiler could rewrite the check in
      unexpected ways.  Consider the following equivalent form, assuming
      groups_per_flex is unsigned for simplicity.
      
      	groups_per_flex = 1 << sbi->s_log_groups_per_flex;
      	if (groups_per_flex == 0 || groups_per_flex == 1) {
      
      We compile the code snippet using Clang 3.0 and GCC 4.6.  Clang will
      completely optimize away the check groups_per_flex == 0, leaving the
      patched code as vulnerable as the original.  GCC keeps the check, but
      there is no guarantee that future versions will do the same.
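      
      A sketch of the robust pattern: validate the shift amount itself
      before ever shifting, so no check depends on the result of an
      oversized shift (exact bounds assumed):
      
      	if (sbi->s_log_groups_per_flex < 1 ||
      	    sbi->s_log_groups_per_flex > 31) {
      		sbi->s_log_groups_per_flex = 0;	/* flex_bg disabled */
      		return 1;
      	}
      	groups_per_flex = 1 << sbi->s_log_groups_per_flex;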
      Signed-off-by: Xi Wang <xi.wang@gmail.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      d50f2ab6
    • coda: deal correctly with allocation failure from coda_cnode_makectl() · 0b2c4e39
      Al Viro authored
      Lookup should fail with ENOMEM, not silently make the dentry
      negative.  Switched to saner calling conventions while we are at it.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      0b2c4e39
  4. 10 January 2012, 7 commits
    • vfs: new helper - d_make_root() · adc0e91a
      Al Viro authored
      d_alloc_root() with iput() in case of allocation failure...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      adc0e91a
    • dcache: use a dispose list in select_parent · b48f03b3
      Dave Chinner authored
      select_parent() currently abuses the dentry cache LRU to provide
      cleanup features for child dentries that need to be freed.  It moves
      them to the tail of the LRU, then tells shrink_dcache_parent() to
      call __shrink_dcache_sb() to unconditionally move them to a dispose
      list (as DCACHE_REFERENCED is ignored).  __shrink_dcache_sb() has to
      relock the dentries to move them off the LRU onto the dispose list,
      but otherwise does not touch the dentries that select_parent() moved
      to the tail of the LRU.  It then passes the dispose list to
      shrink_dentry_list(), which tries to free the dentries.
      
      IOWs, the use of __shrink_dcache_sb() is superfluous: we can build
      exactly the same list of dentries for disposal directly in
      select_parent() and call shrink_dentry_list() instead of calling
      __shrink_dcache_sb() to do that.  This means that we avoid long
      holds on the LRU lock walking the LRU moving dentries to the dispose
      list.  We also avoid the need to relock each dentry just to move it
      off the LRU, reducing the number of times we lock each dentry to
      dispose of them in shrink_dcache_parent() from 3 to 2 times.
      
      Further, we remove one of the two callers of __shrink_dcache_sb().
      This also means that __shrink_dcache_sb() can be moved back into
      prune_dcache_sb() and we no longer have to handle referenced
      dentries conditionally, simplifying the code.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      b48f03b3
    • ceph: d_alloc_root() may fail · 3c5184ef
      Al Viro authored
      ... and ceph_init_dentry(NULL) will oops
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      3c5184ef
    • ext4: fix failure exits · 94bf608a
      Al Viro authored
      a) leaking root dentry is bad
      b) in case of failed ext4_mb_init() we don't want to do ext4_mb_release()
      c) OTOH, in the same case we *do* want ext4_ext_release()
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      94bf608a
    • NFSv4: Change the default setting of the nfs4_disable_idmapping parameter · 074b1d12
      Trond Myklebust authored
      Now that the use of numeric uids/gids is officially sanctioned in
      RFC3530bis, it is time to change the default here to 'enabled'.
      
      By doing so, we ensure that NFSv4 copies the behaviour of NFSv3 when we're
      using the default AUTH_SYS authentication (i.e. when the client uses the
      numeric uids/gids as authentication tokens), so that when new files are
      created, they will appear to have the correct user/group.
      It also fixes a number of backward compatibility issues when migrating
      from NFSv3 to NFSv4 on a platform where the server uses different uid/gid
      mappings than the client.
      
      Note also that this setting has been successfully tested against servers
      that do not support numeric uids/gids at several Connectathon/Bakeathon
      events at this point, and the fall back to using string names/groups has
      been shown to work well in all those test cases.
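      
      For reference, a sketch of pinning the old behaviour where a setup
      still needs it (nfs4_disable_idmapping is the existing module
      parameter whose default is being flipped; the module name is assumed
      to be 'nfs', as at the time):
      
      	# keep using the idmapper even with AUTH_SYS:
      	echo 0 > /sys/module/nfs/parameters/nfs4_disable_idmapping
      
      	# or on the kernel command line:
      	nfs.nfs4_disable_idmapping=0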
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      074b1d12
    • mtd: do not use mtd->block_markbad directly · 800ffd34
      Artem Bityutskiy authored
      Instead, use the new 'mtd_can_have_bb()' helper, or just rely on the
      'mtd_block_markbad()' return code, which will be -EOPNOTSUPP if bad
      blocks are not supported.
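      
      A sketch of the two call patterns (mtd_can_have_bb() boils down to
      checking whether the driver implements block_isbad):
      
      	/* either test the capability up front ... */
      	if (mtd_can_have_bb(mtd) && mtd_block_isbad(mtd, ofs))
      		goto skip_block;
      
      	/* ... or just call and interpret the error */
      	err = mtd_block_markbad(mtd, ofs);
      	if (err == -EOPNOTSUPP)
      		err = 0;	/* no bad-block concept (e.g. NOR flash) */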
      Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      800ffd34
    • logfs: do not use 'mtd->block_isbad' directly · d58b27ed
      Artem Bityutskiy authored
      Instead, use the new 'mtd_can_have_bb()' helper.
      
      Cc: Jörn Engel <joern@logfs.org>
      Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      d58b27ed