1. 12 Oct 2016, 6 commits
    • pipe: fix limit checking in alloc_pipe_info() · a005ca0e
      Committed by Michael Kerrisk (man-pages)
      The limit checking in alloc_pipe_info() (used by pipe(2) and when
      opening a FIFO) has the following problems:
      
      (1) When checking capacity required for the new pipe, the checks against
          the limits in /proc/sys/fs/pipe-user-pages-{soft,hard} are made
          against existing consumption, and exclude the memory required for
          the new pipe capacity. As a consequence: (a) the memory allocation
          throttling provided by the soft limit does not kick in quite as
          early as it should, and (b) the user can overrun the hard limit.
      
      (2) As currently implemented, accounting and checking against the limits
          is done as follows:
      
          (a) Test whether the user has exceeded the limit.
          (b) Make new pipe buffer allocation.
          (c) Account new allocation against the limits.
      
          This is racy. Multiple processes may pass point (a) simultaneously,
          and then allocate pipe buffers that are accounted for only in step
          (c).  The race means that the user's pipe buffer allocation could be
          pushed over the limit (by an arbitrary amount, depending on how
          unlucky we were in the race). [Thanks to Vegard Nossum for spotting
          this point, which I had missed.]
      
      This patch addresses the above problems as follows:
      
      * Alter the checks against limits to include the memory required for the
        new pipe.
      * Re-order the accounting step so that it precedes the buffer allocation.
        If the accounting step determines that a limit has been reached, revert
        the accounting and cause the operation to fail.
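
      The reordered flow can be sketched as follows. This is an illustration
      only, not the patch itself; it reuses the helper names that already
      exist in fs/pipe.c (account_pipe_buffers(), too_many_pipe_buffers_soft()
      and too_many_pipe_buffers_hard()), and the real code may differ in
      detail:

      account_pipe_buffers(user, 0, pipe_bufs);       /* charge first ... */

      if (too_many_pipe_buffers_soft(user)) {
              /* ... so the soft limit sees the new pipe: fall back to one page */
              account_pipe_buffers(user, pipe_bufs, 1);
              pipe_bufs = 1;
      }

      if (too_many_pipe_buffers_hard(user))
              goto out_revert_acct;           /* hard limit can no longer be overrun */

      pipe->bufs = kcalloc(pipe_bufs, sizeof(struct pipe_buffer),
                           GFP_KERNEL_ACCOUNT);
      if (pipe->bufs == NULL)
              goto out_revert_acct;           /* allocation failed: give the charge back */

      /* ... initialize the pipe and return it ... */

      out_revert_acct:
              account_pipe_buffers(user, pipe_bufs, 0);   /* revert the accounting and fail */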
      
      Link: http://lkml.kernel.org/r/8ff3e9f9-23f6-510c-644f-8e70cd1c0bd9@gmail.com
      Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
      Reviewed-by: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: <socketpair@gmail.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a005ca0e
    • pipe: simplify logic in alloc_pipe_info() · 09b4d199
      Committed by Michael Kerrisk (man-pages)
      Replace an 'if' block that covers most of the code in this function
      with a 'goto'. This makes the code a little simpler to read, and also
      simplifies the next patch (fix limit checking in alloc_pipe_info())
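
      The shape of the change is the usual early-exit refactor; a generic
      sketch (not the literal diff):

      /* Before: the success path sits inside one large 'if' block. */
      pipe->bufs = kcalloc(pipe_bufs, sizeof(struct pipe_buffer), GFP_KERNEL_ACCOUNT);
      if (pipe->bufs) {
              /* ... initialize the pipe ... */
              return pipe;
      }
      kfree(pipe);
      return NULL;

      /* After: bail out early with a 'goto' and keep the main path flat. */
      pipe->bufs = kcalloc(pipe_bufs, sizeof(struct pipe_buffer), GFP_KERNEL_ACCOUNT);
      if (pipe->bufs == NULL)
              goto out_free;
      /* ... initialize the pipe ... */
      return pipe;

      out_free:
              kfree(pipe);
              return NULL;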
      
      Link: http://lkml.kernel.org/r/aef030c1-0257-98a9-4988-186efa48530c@gmail.com
      Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
      Reviewed-by: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: <socketpair@gmail.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09b4d199
    • pipe: fix limit checking in pipe_set_size() · b0b91d18
      Committed by Michael Kerrisk (man-pages)
      The limit checking in pipe_set_size() (used by fcntl(F_SETPIPE_SZ))
      has the following problems:
      
      (1) When increasing the pipe capacity, the checks against the limits in
          /proc/sys/fs/pipe-user-pages-{soft,hard} are made against existing
          consumption, and exclude the memory required for the increased pipe
          capacity. The new increase in pipe capacity can then push the total
          memory used by the user for pipes (possibly far) over a limit. This
          can also trigger the problem described next.
      
      (2) The limit checks are performed even when the new pipe capacity is
          less than the existing pipe capacity. This can lead to problems if a
          user sets a large pipe capacity, and then the limits are lowered,
          with the result that the user will no longer be able to decrease the
          pipe capacity.
      
      (3) As currently implemented, accounting and checking against the
          limits is done as follows:
      
          (a) Test whether the user has exceeded the limit.
          (b) Make new pipe buffer allocation.
          (c) Account new allocation against the limits.
      
          This is racy. Multiple processes may pass point (a)
          simultaneously, and then allocate pipe buffers that are accounted
          for only in step (c).  The race means that the user's pipe buffer
          allocation could be pushed over the limit (by an arbitrary amount,
          depending on how unlucky we were in the race). [Thanks to Vegard
          Nossum for spotting this point, which I had missed.]
      
      This patch addresses the above problems as follows:
      
      * Perform checks against the limits only when increasing a pipe's
        capacity; an unprivileged user can always decrease a pipe's capacity.
      * Alter the checks against limits to include the memory required for
        the new pipe capacity.
      * Re-order the accounting step so that it precedes the buffer
        allocation. If the accounting step determines that a limit has
        been reached, revert the accounting and cause the operation to fail.
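
      Taken together, the resulting F_SETPIPE_SZ path looks roughly like the
      sketch below (illustrative only: privilege checks are simplified, the
      helper names follow those already used in fs/pipe.c, and the real patch
      may differ in detail):

      static long pipe_set_size(struct pipe_inode_info *pipe, unsigned long arg)
      {
              unsigned int size, nr_pages;

              size = round_pipe_size(arg);
              nr_pages = size >> PAGE_SHIFT;
              if (nr_pages == 0)
                      return -EINVAL;

              /* The pipe-max-size check only applies when growing the pipe. */
              if (nr_pages > pipe->buffers &&
                  size > pipe_max_size && !capable(CAP_SYS_RESOURCE))
                      return -EPERM;

              /* Always account the change of size ... */
              account_pipe_buffers(pipe->user, pipe->buffers, nr_pages);

              /* ... but check the soft/hard limits only when growing. */
              if (nr_pages > pipe->buffers &&
                  !capable(CAP_SYS_RESOURCE) &&
                  (too_many_pipe_buffers_soft(pipe->user) ||
                   too_many_pipe_buffers_hard(pipe->user))) {
                      /* revert the accounting and fail */
                      account_pipe_buffers(pipe->user, nr_pages, pipe->buffers);
                      return -EPERM;
              }

              /* ... reallocate the buffer ring to nr_pages slots ... */
              return nr_pages * PAGE_SIZE;
      }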
      
      The program below can be used to demonstrate problems 1 and 2, and the
      effect of the fix. The program takes one or more command-line arguments.
      The first argument specifies the number of pipes that the program should
      create. The remaining arguments are, alternately, pipe capacities that
      should be set using fcntl(F_SETPIPE_SZ), and sleep intervals (in
      seconds) between the fcntl() operations. (The sleep intervals make it
      possible to change the limits between fcntl() operations.)
      
      Problem 1
      =========
      
      Using the test program on an unpatched kernel, we first set some
      limits:
      
          # echo 0 > /proc/sys/fs/pipe-user-pages-soft
          # echo 1000000000 > /proc/sys/fs/pipe-max-size
          # echo 10000 > /proc/sys/fs/pipe-user-pages-hard    # 40.96 MB
      
      Then show that we can set a pipe capacity (100MB) that is over the
      hard limit:
      
          # sudo -u mtk ./test_F_SETPIPE_SZ 1 100000000
          Initial pipe capacity: 65536
              Loop 1: set pipe capacity to 100000000 bytes
                  F_SETPIPE_SZ returned 134217728
      
      Now set the capacity to 100MB twice. The second call fails (which is
      probably surprising to most users, since it seems like a no-op):
      
          # sudo -u mtk ./test_F_SETPIPE_SZ 1 100000000 0 100000000
          Initial pipe capacity: 65536
              Loop 1: set pipe capacity to 100000000 bytes
                  F_SETPIPE_SZ returned 134217728
              Loop 2: set pipe capacity to 100000000 bytes
                  Loop 2, pipe 0: F_SETPIPE_SZ failed: fcntl: Operation not permitted
      
      With a patched kernel, setting a capacity over the limit fails at the
      first attempt:
      
          # echo 0 > /proc/sys/fs/pipe-user-pages-soft
          # echo 1000000000 > /proc/sys/fs/pipe-max-size
          # echo 10000 > /proc/sys/fs/pipe-user-pages-hard
          # sudo -u mtk ./test_F_SETPIPE_SZ 1 100000000
          Initial pipe capacity: 65536
              Loop 1: set pipe capacity to 100000000 bytes
                  Loop 1, pipe 0: F_SETPIPE_SZ failed: fcntl: Operation not permitted
      
      There is a small chance that the change to fix this problem could
      break user-space, since there are cases where fcntl(F_SETPIPE_SZ)
      calls that previously succeeded might fail. However, the chances are
      small, since (a) the pipe-user-pages-{soft,hard} limits are new (in
      4.5), and (b) the default soft/hard limits are high/unlimited.  Therefore,
      it seems warranted to make these limits operate more precisely (and
      behave more like what users probably expect).
      
      Problem 2
      =========
      
      Running the test program on an unpatched kernel, we first set some limits:
      
          # getconf PAGESIZE
          4096
          # echo 0 > /proc/sys/fs/pipe-user-pages-soft
          # echo 1000000000 > /proc/sys/fs/pipe-max-size
          # echo 10000 > /proc/sys/fs/pipe-user-pages-hard    # 40.96 MB
      
      Now perform two fcntl(F_SETPIPE_SZ) operations on a single pipe,
      first setting a pipe capacity (10MB), sleeping for a few seconds,
      during which time the hard limit is lowered, and then setting the pipe
      capacity to a smaller amount (5MB):
      
          # sudo -u mtk ./test_F_SETPIPE_SZ 1 10000000 15 5000000 &
          [1] 748
          # Initial pipe capacity: 65536
              Loop 1: set pipe capacity to 10000000 bytes
                  F_SETPIPE_SZ returned 16777216
                  Sleeping 15 seconds
      
          # echo 1000 > /proc/sys/fs/pipe-user-pages-hard      # 4.096 MB
          #     Loop 2: set pipe capacity to 5000000 bytes
                  Loop 2, pipe 0: F_SETPIPE_SZ failed: fcntl: Operation not permitted
      
      In this case, the user should be able to lower the pipe capacity.
      
      With a kernel that has the patch below, the second fcntl()
      succeeds:
      
          # echo 0 > /proc/sys/fs/pipe-user-pages-soft
          # echo 1000000000 > /proc/sys/fs/pipe-max-size
          # echo 10000 > /proc/sys/fs/pipe-user-pages-hard
          # sudo -u mtk ./test_F_SETPIPE_SZ 1 10000000 15 5000000 &
          [1] 3215
          # Initial pipe capacity: 65536
          #     Loop 1: set pipe capacity to 10000000 bytes
                  F_SETPIPE_SZ returned 16777216
                  Sleeping 15 seconds
      
          # echo 1000 > /proc/sys/fs/pipe-user-pages-hard
      
          #     Loop 2: set pipe capacity to 5000000 bytes
                  F_SETPIPE_SZ returned 8388608
      
      8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---
      
      /* test_F_SETPIPE_SZ.c
      
         (C) 2016, Michael Kerrisk; licensed under GNU GPL version 2 or later
      
         Test operation of fcntl(F_SETPIPE_SZ) for setting pipe capacity
         and interactions with limits defined by /proc/sys/fs/pipe-* files.
      */
      
      #define _GNU_SOURCE
      #include <stdio.h>
      #include <stdlib.h>
      #include <fcntl.h>
      #include <unistd.h>
      
      int
      main(int argc, char *argv[])
      {
          int (*pfd)[2];
          int npipes;
          int pcap, rcap;
          int j, p, s, stime, loop;
      
          if (argc < 2) {
              fprintf(stderr, "Usage: %s num-pipes "
                      "[pipe-capacity sleep-time]...\n", argv[0]);
              exit(EXIT_FAILURE);
          }
      
          npipes = atoi(argv[1]);
      
          pfd = calloc(npipes, sizeof (int [2]));
          if (pfd == NULL) {
              perror("calloc");
              exit(EXIT_FAILURE);
          }
      
          for (j = 0; j < npipes; j++) {
              if (pipe(pfd[j]) == -1) {
                  fprintf(stderr, "Loop %d: pipe() failed: ", j);
                  perror("pipe");
                  exit(EXIT_FAILURE);
              }
          }
      
          printf("Initial pipe capacity: %d\n", fcntl(pfd[0][0], F_GETPIPE_SZ));
      
          for (j = 2; j < argc; j += 2 ) {
              loop = j / 2;
              pcap = atoi(argv[j]);
              printf("    Loop %d: set pipe capacity to %d bytes\n", loop, pcap);
      
              for (p = 0; p < npipes; p++) {
                  s = fcntl(pfd[p][0], F_SETPIPE_SZ, pcap);
                  if (s == -1) {
                      fprintf(stderr, "        Loop %d, pipe %d: F_SETPIPE_SZ "
                              "failed: ", loop, p);
                      perror("fcntl");
                      exit(EXIT_FAILURE);
                  }
      
                  if (p == 0) {
                      printf("        F_SETPIPE_SZ returned %d\n", s);
                      rcap = s;
                  } else {
                      if (s != rcap) {
                          fprintf(stderr, "        Loop %d, pipe %d: F_SETPIPE_SZ "
                                  "unexpected return: %d\n", loop, p, s);
                          exit(EXIT_FAILURE);
                      }
                  }
      
                  stime = (j + 1 < argc) ? atoi(argv[j + 1]) : 0;
                  if (stime > 0) {
                      printf("        Sleeping %d seconds\n", stime);
                      sleep(stime);
                  }
              }
          }
      
          exit(EXIT_SUCCESS);
      }
      
      8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---
      
      Patch history:
      
      v2
         * Switch order of test in 'if' statement to avoid function call
           (to capable()) in normal path. [This is a fix to a preexisting
           wart in the code. Thanks to Willy Tarreau]
         * Perform (size > pipe_max_size) check before calling
           account_pipe_buffers().  [Thanks to Vegard Nossum]
            Quoting Vegard:
      
              The potential problem happens if the user passes a very large number
              which will overflow pipe->user->pipe_bufs.
      
              On 32-bit, sizeof(int) == sizeof(long), so if they pass arg = INT_MAX
              then round_pipe_size() returns INT_MAX. Although it's true that the
              accounting is done in terms of pages and not bytes, so you'd need on
              the order of (1 << 13) = 8192 processes hitting the limit at the same
              time in order to make it overflow, which seems a bit unlikely.
      
              (See https://lkml.org/lkml/2016/8/12/215 for another discussion on the
              limit checking)
      
      Link: http://lkml.kernel.org/r/1e464945-536b-2420-798b-e77b9c7e8593@gmail.com
      Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
      Reviewed-by: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: <socketpair@gmail.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b0b91d18
    • pipe: refactor argument for account_pipe_buffers() · 3734a13b
      Committed by Michael Kerrisk (man-pages)
      This is a preparatory patch for the following work. account_pipe_buffers()
      performs accounting in the 'user_struct'. There is no need to pass a
      pointer to a 'pipe_inode_info' struct (which is then dereferenced to
      obtain a pointer to the 'user' field). Instead, pass a pointer directly
      to the 'user_struct'. This change is needed in preparation for a
      subsequent patch that fixes the limit checking in alloc_pipe_info()
      (and the resulting code is a little more logical).
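
      The shape of the change is just the parameter swap below (a sketch; the
      helper's one-line body is shown only for orientation and may not match
      the kernel exactly):

      /* Before: the helper takes the pipe and digs the user out itself. */
      static void account_pipe_buffers(struct pipe_inode_info *pipe,
                                       unsigned long old, unsigned long new)
      {
              atomic_long_add(new - old, &pipe->user->pipe_bufs);
      }

      /* After: callers pass the user_struct directly. */
      static void account_pipe_buffers(struct user_struct *user,
                                       unsigned long old, unsigned long new)
      {
              atomic_long_add(new - old, &user->pipe_bufs);
      }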
      
      Link: http://lkml.kernel.org/r/7277bf8c-a6fc-4a7d-659c-f5b145c981ab@gmail.com
      Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
      Reviewed-by: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: <socketpair@gmail.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3734a13b
    • pipe: move limit checking logic into pipe_set_size() · d37d4166
      Committed by Michael Kerrisk (man-pages)
      This is a preparatory patch for the following work. Move the F_SETPIPE_SZ
      limit-checking logic from pipe_fcntl() into pipe_set_size().  This
      simplifies the code a little, and allows for the reworking required in a
      later patch that fixes the limit checking in pipe_set_size().
      
      Link: http://lkml.kernel.org/r/3701b2c5-2c52-2c3e-226d-29b9deb29b50@gmail.com
      Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
      Reviewed-by: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: <socketpair@gmail.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d37d4166
    • pipe: relocate round_pipe_size() above pipe_set_size() · f491bd71
      Committed by Michael Kerrisk (man-pages)
      Patch series "pipe: fix limit handling", v2.
      
      When changing a pipe's capacity with fcntl(F_SETPIPE_SZ), various limits
      defined by /proc/sys/fs/pipe-* files are checked to see if unprivileged
      users are exceeding limits on memory consumption.
      
      While documenting and testing the operation of these limits I noticed
      that, as currently implemented, these checks have a number of problems:
      
      (1) When increasing the pipe capacity, the checks against the limits
          in /proc/sys/fs/pipe-user-pages-{soft,hard} are made against
          existing consumption, and exclude the memory required for the
          increased pipe capacity. The new increase in pipe capacity can then
          push the total memory used by the user for pipes (possibly far) over
          a limit. This can also trigger the problem described next.
      
      (2) The limit checks are performed even when the new pipe capacity
          is less than the existing pipe capacity. This can lead to problems
          if a user sets a large pipe capacity, and then the limits are
          lowered, with the result that the user will no longer be able to
          decrease the pipe capacity.
      
      (3) As currently implemented, accounting and checking against the
          limits is done as follows:
      
          (a) Test whether the user has exceeded the limit.
          (b) Make new pipe buffer allocation.
          (c) Account new allocation against the limits.
      
          This is racy. Multiple processes may pass point (a) simultaneously,
          and then allocate pipe buffers that are accounted for only in step
          (c).  The race means that the user's pipe buffer allocation could be
          pushed over the limit (by an arbitrary amount, depending on how
          unlucky we were in the race). [Thanks to Vegard Nossum for spotting
          this point, which I had missed.]
      
      This patch series addresses these three problems.
      
      This patch (of 8):
      
      This is a minor preparatory patch.  After subsequent patches,
      round_pipe_size() will be called from pipe_set_size(), so place
      round_pipe_size() above pipe_set_size().
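
      For reference, round_pipe_size() rounds a requested capacity up to a
      power-of-two number of pages, roughly as in this sketch (the exact
      kernel implementation may differ):

      static unsigned int round_pipe_size(unsigned int size)
      {
              unsigned long nr_pages;

              /* Round up to whole pages, then to a power-of-two page count. */
              nr_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
              return roundup_pow_of_two(nr_pages) << PAGE_SHIFT;
      }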
      
      Link: http://lkml.kernel.org/r/91a91fdb-a959-ba7f-b551-b62477cc98a1@gmail.com
      Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
      Reviewed-by: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: <socketpair@gmail.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f491bd71
  2. 06 Oct 2016, 2 commits
  3. 28 Sep 2016, 1 commit
  4. 10 Aug 2016, 1 commit
    • mm: memcontrol: only mark charged pages with PageKmemcg · c4159a75
      Committed by Vladimir Davydov
      To distinguish non-slab pages charged to kmemcg we mark them PageKmemcg,
      which sets page->_mapcount to -512.  Currently, we set/clear PageKmemcg
      in __alloc_pages_nodemask()/free_pages_prepare() for any page allocated
      with __GFP_ACCOUNT, including those that aren't actually charged to any
      cgroup, i.e. allocated from the root cgroup context.  To avoid overhead
      in case cgroups are not used, we only do that if memcg_kmem_enabled() is
      true.  The latter is set iff there are kmem-enabled memory cgroups
      (online or offline).  The root cgroup is not considered kmem-enabled.
      
      As a result, if a page is allocated with __GFP_ACCOUNT for the root
      cgroup when there are kmem-enabled memory cgroups and is freed after all
      kmem-enabled memory cgroups were removed, e.g.
      
        # no memory cgroup has been created yet, create one
        mkdir /sys/fs/cgroup/memory/test
        # run something allocating pages with __GFP_ACCOUNT, e.g.
        # a program using pipe
        dmesg | tail
        # remove the memory cgroup
        rmdir /sys/fs/cgroup/memory/test
      
      we'll get a bad page state bug complaining about page->_mapcount != -1:
      
        BUG: Bad page state in process swapper/0  pfn:1fd945c
        page:ffffea007f651700 count:0 mapcount:-511 mapping:          (null) index:0x0
        flags: 0x1000000000000000()
      
      To avoid that, let's mark with PageKmemcg only those pages that are
      actually charged to, and hence pin, a non-root memory cgroup.
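
      Conceptually, the charge path now sets the flag only when a non-root
      memcg was actually charged. A simplified sketch of that rule (the helper
      names approximate those in mm/memcontrol.c and are illustrative, not the
      literal patch):

      int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
      {
              struct mem_cgroup *memcg;
              int ret = 0;

              memcg = get_mem_cgroup_from_mm(current->mm);
              if (!mem_cgroup_is_root(memcg)) {
                      ret = memcg_kmem_charge_memcg(page, gfp, order, memcg);
                      if (!ret)
                              __SetPageKmemcg(page); /* mark only what was charged */
              }
              css_put(&memcg->css);
              return ret;
      }

      /* The free path then uncharges only pages that carry PageKmemcg. */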
      
      Fixes: 4949148a ("mm: charge/uncharge kmemcg from generic page allocator paths")
      Reported-and-tested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c4159a75
  5. 27 Jul 2016, 1 commit
    • pipe: account to kmemcg · d86133bd
      Committed by Vladimir Davydov
      Pipes can consume a significant amount of system memory, hence they
      should be accounted to kmemcg.
      
      This patch marks pipe_inode_info and anonymous pipe buffer page
      allocations as __GFP_ACCOUNT so that they would be charged to kmemcg.
      Note, since a pipe buffer page can be "stolen" and get reused for other
      purposes, including mapping to userspace, we clear PageKmemcg (thus
      resetting page->_mapcount) and uncharge it in anon_pipe_buf_steal, which
      is introduced by this patch.
      
      A note regarding the anon_pipe_buf_steal implementation.  We allow the
      page to be stolen if its ref count equals 1.  It looks racy, but it is
      correct for anonymous pipe buffer pages, because:
      
       - We lock out all other pipe users, because ->steal is called with
         pipe_lock held, so the page can't be spliced to another pipe from
         under us.
      
       - The page is not on LRU and it never was.
      
       - Thus a parallel thread can access it only by PFN. Although this is
         quite possible (e.g. see page_idle_get_page and balloon_page_isolate)
         this is not dangerous, because all such functions do is increase page
         ref count, check if the page is the one they are looking for, and
         decrease ref count if it isn't. Since our page is clean except for
         PageKmemcg mark, which doesn't conflict with other _mapcount users,
         the worst that can happen is we see page_count > 2 due to a transient
         ref, in which case we false-positively abort ->steal, which is still
         fine, because ->steal is not guaranteed to succeed.
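
      A sketch of what anon_pipe_buf_steal() does under these assumptions
      (illustrative; see the commit itself for the exact code):

      static int anon_pipe_buf_steal(struct pipe_inode_info *pipe,
                                     struct pipe_buffer *buf)
      {
              struct page *page = buf->page;

              /*
               * A ref count of 1 means only this pipe buffer holds the page:
               * uncharge it from kmemcg (which clears PageKmemcg and thereby
               * restores page->_mapcount) before it is handed over.
               */
              if (page_count(page) == 1) {
                      if (memcg_kmem_enabled())
                              memcg_kmem_uncharge(page, 0);
                      __SetPageLocked(page);
                      return 0;
              }
              return 1;
      }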
      
      Link: http://lkml.kernel.org/r/20160527150313.GD26059@esperanza
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d86133bd
  6. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Committed by Kirill A. Shutemov
      PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
      ago with the promise that one day it would be possible to implement the
      page cache with bigger chunks than PAGE_SIZE.

      This promise never materialized, and it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE.  And it's a constant source of confusion on whether the
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straight-forward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using the
      script below.  For some reason, coccinelle doesn't patch header files.
      I've called spatch for them manually.

      The only adjustment after coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.

      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
  7. 20 Jan 2016, 1 commit
    • pipe: limit the per-user amount of pages allocated in pipes · 759c0114
      Committed by Willy Tarreau
      On not-so-small systems, it is possible for a single process to cause an
      OOM condition by filling large pipes with data that are never read. A
      typical process filling 4000 pipes with 1 MB of data will use 4 GB of
      memory. On small systems it may be tricky to set the pipe max size to
      prevent this from happening.
      
      This patch makes it possible to enforce a per-user soft limit above
      which new pipes will be limited to a single page, effectively limiting
      them to 4 kB each, as well as a hard limit above which no new pipes may
      be created for this user. This has the effect of protecting the system
      against memory abuse without hurting other users, and still allowing
      pipes to work correctly, though with less data at once.
      
      The limits are controlled by two new sysctls: pipe-user-pages-soft and
      pipe-user-pages-hard. Both may be disabled by setting them to zero. The
      default soft limit allows the default number of FDs per process (1024)
      to create pipes of the default size (64kB), thus reaching a limit of 64MB
      before starting to create only smaller pipes. With 256 processes limited
      to 1024 FDs each, this results in 1024*64kB + (256*1024 - 1024) * 4kB =
      1084 MB of memory allocated for a user. The hard limit is disabled by
      default to avoid breaking existing applications that make intensive use
      of pipes (eg: for splicing).
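
      The enforcement point is pipe creation. A sketch of the check in
      alloc_pipe_info(), using the helpers this commit introduces
      (illustrative; details may differ from the actual patch):

      struct user_struct *user = get_current_user();
      unsigned long pipe_bufs = PIPE_DEF_BUFFERS;     /* 16 pages = 64 kB by default */

      if (!too_many_pipe_buffers_hard(user)) {
              if (too_many_pipe_buffers_soft(user))
                      pipe_bufs = 1;                  /* over the soft limit: one page only */
              pipe->bufs = kcalloc(pipe_bufs, sizeof(struct pipe_buffer),
                                   GFP_KERNEL);
      }

      if (pipe->bufs) {
              /* ... usual pipe initialization ... */
              account_pipe_buffers(pipe, 0, pipe_bufs);
              pipe->user = user;
              return pipe;
      }

      /* over the hard limit, or the allocation failed */
      free_uid(user);
      kfree(pipe);
      return NULL;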
      
      Reported-by: socketpair@gmail.com
      Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Mitigates: CVE-2013-4312 (Linux 2.0+)
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      759c0114
  8. 11 Nov 2015, 2 commits
    • fs/pipe.c: return error code rather than 0 in pipe_write() · 6ae08069
      Committed by Eric Biggers
      pipe_write() would return 0 if it failed to merge the beginning of the
      data to write with the last, partially filled pipe buffer.  It should
      return an error code instead.  Userspace programs could be confused by
      write() returning 0 when called with a nonzero 'count'.
      
      The EFAULT error case was a regression from f0d1bec9 ("new helper:
      copy_page_from_iter()"), while the ops->confirm() error case was a much
      older bug.
      
      Test program:
      
      	#include <assert.h>
      	#include <errno.h>
      	#include <unistd.h>
      
      	int main(void)
      	{
      		int fd[2];
      		char data[1] = {0};
      
      		assert(0 == pipe(fd));
      		assert(1 == write(fd[1], data, 1));
      
      		/* prior to this patch, write() returned 0 here  */
      		assert(-1 == write(fd[1], NULL, 1));
      		assert(errno == EFAULT);
      	}
      
      Cc: stable@vger.kernel.org # at least v3.15+
      Signed-off-by: Eric Biggers <ebiggers3@gmail.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      6ae08069
    • fs/pipe.c: preserve alloc_file() error code · e9bb1f9b
      Committed by Eric Biggers
      If sys_pipe() was unable to allocate a 'struct file', it always failed
      with ENFILE, which means "The number of simultaneously open files in the
      system would exceed a system-imposed limit." However, alloc_file()
      actually returns an ERR_PTR value and might fail with other error codes.
      Currently, in addition to ENFILE, it can fail with ENOMEM, potentially
      when there are few open files in the system.  Update sys_pipe() to
      preserve this error code.
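
      The shape of the fix is simply to propagate the ERR_PTR value instead of
      hard-coding ENFILE; a sketch, assuming the pipe's file is created
      roughly along these lines:

      f = alloc_file(&path, FMODE_WRITE, &pipefifo_fops);
      if (IS_ERR(f)) {
              err = PTR_ERR(f);       /* was: err = -ENFILE; */
              goto err_dentry;
      }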
      
      In a prior submission of a similar patch (1) some concern was raised
      about introducing a new error code for sys_pipe().  However, for most
      system calls, programs cannot assume that new error codes will never be
      introduced.  In addition, ENOMEM was, in fact, already a possible error
      code for sys_pipe(), in the case where the file descriptor table could
      not be expanded due to insufficient memory.
      
      	(1) http://comments.gmane.org/gmane.linux.kernel/1357942
      Signed-off-by: Eric Biggers <ebiggers3@gmail.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      e9bb1f9b
  9. 16 Apr 2015, 1 commit
  10. 12 Apr 2015, 1 commit
  11. 26 Mar 2015, 1 commit
  12. 07 May 2014, 3 commits
  13. 02 Apr 2014, 2 commits
  14. 24 Jan 2014, 1 commit
  15. 03 Dec 2013, 1 commit
    • vfs: fix subtle use-after-free of pipe_inode_info · b0d8d229
      Committed by Linus Torvalds
      The pipe code was trying (and failing) to be very careful about freeing
      the pipe info only after the last access, with a pattern like:
      
              spin_lock(&inode->i_lock);
              if (!--pipe->files) {
                      inode->i_pipe = NULL;
                      kill = 1;
              }
              spin_unlock(&inode->i_lock);
              __pipe_unlock(pipe);
              if (kill)
                      free_pipe_info(pipe);
      
      where the final freeing is done last.
      
      HOWEVER.  The above is actually broken, because while the freeing is
      done at the end, if we have two racing processes releasing the pipe
      inode info, the one that *doesn't* free it will decrement the ->files
      count, and unlock the inode i_lock, but then still use the
      "pipe_inode_info" afterwards when it does the "__pipe_unlock(pipe)".
      
      This is *very* hard to trigger in practice, since the race window is
      very small, and adding debug options seems to just hide it by slowing
      things down.
      
      Simon originally reported this way back in July as an Oops in
      kmem_cache_allocate due to a single bit corruption (due to the final
      "spin_unlock(pipe->mutex.wait_lock)" incrementing a field in a different
      allocation that had re-used the free'd pipe-info), it's taken this long
      to figure out.
      
      Since the 'pipe->files' accesses aren't even protected by the pipe lock
      (we very much use the inode lock for that), the simple solution is to
      just drop the pipe lock early.  And since there were two users of this
      pattern, create a helper function for it.
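
      The mainline fix adds a helper along the lines of the sketch below (the
      body is illustrative); callers drop the pipe lock first and only then
      call it:

      static void put_pipe_info(struct inode *inode, struct pipe_inode_info *pipe)
      {
              int kill = 0;

              spin_lock(&inode->i_lock);
              if (!--pipe->files) {
                      inode->i_pipe = NULL;
                      kill = 1;
              }
              spin_unlock(&inode->i_lock);

              if (kill)
                      free_pipe_info(pipe);
      }

      /* callers now do:  __pipe_unlock(pipe);  put_pipe_info(inode, pipe); */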
      
      Introduced by commit ba5bb147 ("pipe: take allocation and freeing of
      pipe_inode_info out of ->i_mutex").
      Reported-by: Simon Kirby <sim@hostway.ca>
      Reported-by: Ian Applegate <ia@cloudflare.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Cc: stable@kernel.org   # v3.10+
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b0d8d229
  16. 08 May 2013, 1 commit
  17. 10 Apr 2013, 11 commits
  18. 12 Mar 2013, 1 commit
    • vfs: fix pipe counter breakage · a930d879
      Committed by Al Viro
      If you open a pipe for neither read nor write, the pipe code will not
      add any usage counters to the pipe, causing the 'struct pipe_inode_info'
      to be potentially released early.
      
      That doesn't normally matter, since you cannot actually use the pipe,
      but the pipe release code - particularly fasync handling - still expects
      the actual pipe infrastructure to all be there.  And rather than adding
      NULL pointer checks, let's just disallow this case, the same way we
      already do for the named pipe ("fifo") case.
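
      A sketch of the check this adds to the FIFO/pipe open path
      (illustrative, not the literal patch):

      switch (filp->f_mode & (FMODE_READ | FMODE_WRITE)) {
      case FMODE_READ:
              /* bump pipe->readers / pipe->r_counter ... */
              break;
      case FMODE_WRITE:
              /* bump pipe->writers / pipe->w_counter ... */
              break;
      case FMODE_READ | FMODE_WRITE:
              /* bump both ... */
              break;
      default:
              /* neither read nor write: reject instead of leaving counters at 0 */
              ret = -EINVAL;
              goto err;
      }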
      
      This is ancient, going back to pre-2.4 days, and until trinity, nobody
      ever noticed.
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a930d879
  19. 23 Feb 2013, 2 commits