  1. 15 Oct 2016, 1 commit
  2. 12 Oct 2016, 1 commit
  3. 06 Oct 2016, 2 commits
    • pipe: add pipe_buf_release() helper · a779638c
      Authored by Miklos Szeredi
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
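      The commit body is not shown here; the helper plausibly just wraps the
      buf->ops->release() call so pipe-buffer users stop open-coding it. A sketch
      of that shape (not necessarily the exact in-tree definition):

          static inline void pipe_buf_release(struct pipe_inode_info *pipe,
                                              struct pipe_buffer *buf)
          {
                  const struct pipe_buf_operations *ops = buf->ops;

                  buf->ops = NULL;
                  ops->release(pipe, buf);
          }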
    • new iov_iter flavour: pipe-backed · 241699cd
      Authored by Al Viro
      iov_iter variant for passing data into pipe.  copy_to_iter()
      copies data into page(s) it has allocated and stuffs them into
      the pipe; copy_page_to_iter() stuffs there a reference to the
      page given to it.  Both will try to coalesce if possible.
      iov_iter_zero() is similar to copy_to_iter(); iov_iter_get_pages()
      and friends will do as copy_to_iter() would have and return the
      pages where the data would've been copied.  iov_iter_advance()
      will truncate everything past the spot it has advanced to.
      
      New primitive: iov_iter_pipe(), used for initializing those.
      pipe should be locked all along.
      
      Running out of space acts as fault would for iovec-backed ones;
      in other words, giving it to ->read_iter() may result in short
      read if the pipe overflows, or -EFAULT if it happens with nothing
      copied there.
      
      In other words, ->read_iter() on those acts pretty much like
      ->splice_read().  Moreover, all generic_file_splice_read() users,
      as well as many other ->splice_read() instances can be switched
      to that scheme - that'll happen in the next commit.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
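      For illustration, roughly how a ->splice_read() looks once switched to the
      pipe-backed iterator as described above (a sketch of the follow-up change;
      the function name here is made up and error handling is simplified):

          static ssize_t pipe_backed_splice_read(struct file *in, loff_t *ppos,
                                                 struct pipe_inode_info *pipe,
                                                 size_t len, unsigned int flags)
          {
                  struct iov_iter to;
                  struct kiocb kiocb;
                  ssize_t ret;

                  /* pipe-backed destination; the pipe stays locked throughout */
                  iov_iter_pipe(&to, ITER_PIPE | READ, pipe, len);
                  init_sync_kiocb(&kiocb, in);
                  kiocb.ki_pos = *ppos;

                  /* short read if the pipe overflows, -EFAULT if nothing was copied */
                  ret = in->f_op->read_iter(&kiocb, &to);
                  if (ret > 0)
                          *ppos = kiocb.ki_pos;
                  return ret;
          }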
  4. 28 Sep 2016, 1 commit
    • get rid of separate multipage fault-in primitives · 4bce9f6e
      Authored by Al Viro
      * the only remaining callers of the "short" fault-ins are just as happy with the
      generic variants (both in lib/iov_iter.c); switch them to the multipage variants
      and kill the "short" ones.
      * rename the multipage variants to the now-available plain names.
      * get rid of the compat macro defining iov_iter_fault_in_multipage_readable by
      expanding it in its only user.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
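      After the rename, callers fault in the whole (possibly multipage) range up
      front and only then attempt the atomic copy. An illustrative caller, in the
      style of generic_perform_write() (the wrapper name is made up):

          /* returns bytes copied, or -EFAULT if the user pages can't be faulted in */
          static ssize_t copy_chunk_from_user(struct page *page, unsigned long offset,
                                              size_t bytes, struct iov_iter *i)
          {
                  if (unlikely(iov_iter_fault_in_readable(i, bytes)))
                          return -EFAULT;

                  /* pagefaults are disabled inside; a short copy is still possible */
                  return iov_iter_copy_from_user_atomic(page, i, offset, bytes);
          }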
  5. 18 Sep 2016, 1 commit
  6. 29 Jul 2016, 1 commit
    • mm: optimize copy_page_to/from_iter_iovec · 3fa6c507
      Authored by Mikulas Patocka
      copy_page_to_iter_iovec() and copy_page_from_iter_iovec() copy some data
      to userspace or from userspace.  These functions have a fast path where
      they map a page using kmap_atomic and a slow path where they use kmap.
      
      kmap is slower than kmap_atomic, so the fast path is preferred.
      
      However, on kernels without highmem support, kmap just calls
      page_address, so there is no need to avoid kmap.  On kernels without
      highmem support, the fast path just increases code size (and cache
      footprint) and it doesn't improve copy performance in any way.
      
      This patch enables the fast path only if CONFIG_HIGHMEM is defined.
      
      Code size reduced by this patch:
        x86 (without highmem)	  928
        x86-64		  960
        sparc64		  848
        alpha			 1136
        pa-risc		 1200
      
      [akpm@linux-foundation.org: use IS_ENABLED(), per Andi]
      Link: http://lkml.kernel.org/r/alpine.LRH.2.02.1607221711410.4818@file01.intranet.prod.int.rdu2.redhat.com
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
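      The shape of the change, illustratively (not the exact diff; the helper name
      is made up): the kmap_atomic fast path is only worth compiling in when
      highmem exists, since kmap() is plain page_address() otherwise.

          static size_t copy_chunk_to_user(char __user *buf, struct page *page,
                                           size_t offset, size_t copy)
          {
                  char *kaddr;
                  size_t left;

                  if (IS_ENABLED(CONFIG_HIGHMEM)) {
                          /* fast path: atomic mapping, copy must not sleep */
                          kaddr = kmap_atomic(page);
                          left = __copy_to_user_inatomic(buf, kaddr + offset, copy);
                          kunmap_atomic(kaddr);
                          if (likely(!left))
                                  return copy;
                  }

                  /* slow path, and the only path without CONFIG_HIGHMEM */
                  kaddr = kmap(page);
                  left = copy_to_user(buf, kaddr + offset, copy);
                  kunmap(page);
                  return copy - left;
          }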
  7. 10 Jun 2016, 1 commit
  8. 26 May 2016, 1 commit
    • do "fold checks into iterate_and_advance()" right · 19f18459
      Authored by Al Viro
      the only case when we should skip the iterate_and_advance() guts
      is when nothing's left in the iterator, _not_ just when requested
      amount is 0.  Said guts will do nothing in the latter case anyway;
      the problem we tried to deal with in the aforementioned commit is
      that when there's nothing left *and* the amount requested is 0,
      we might end up dereferencing one iovec too many; the value we fetch
      from there is discarded in that case, but theoretically it might
      oops if the iovec array ends exactly at the end of page with the
      next page not mapped.
      
      Bailing out on zero size requested had an unexpected side effect -
      zero-length segment in the beginning of iovec array ended up
      throwing do_loop_readv_writev() into infinite spin; we do not
      advance past the empty segment at all.  Reproducer is trivial:
      echo '#include <sys/uio.h>' >a.c
      echo 'main() {char c; struct iovec v[] = {{&c,0},{&c,1}}; readv(0,v,2);}' >>a.c
      cc a.c && ./a.out </proc/uptime
      
      which should end up with the process not hanging.  Probably ought to
      go into LTP or xfstests...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
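      The same reproducer, written out as a standalone program (equivalent to the
      echo/cc one-liner above; run it as ./a.out </proc/uptime):

          #include <sys/uio.h>
          #include <unistd.h>

          int main(void)
          {
                  char c;
                  /* zero-length first segment used to wedge do_loop_readv_writev() */
                  struct iovec v[] = { { &c, 0 }, { &c, 1 } };

                  /* should read one byte into the second segment and exit */
                  return readv(0, v, 2) == 1 ? 0 : 1;
          }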
  9. 10 May 2016, 1 commit
  10. 09 Apr 2016, 1 commit
  11. 07 Dec 2015, 2 commits
  12. 12 Apr 2015, 1 commit
  13. 30 Mar 2015, 1 commit
    • saner iov_iter initialization primitives · bc917be8
      Authored by Al Viro
      iovec-backed iov_iter instances are assumed to satisfy several properties:
      	* no more than UIO_MAXIOV elements in iovec array
      	* total size of all ranges is no more than MAX_RW_COUNT
      	* all ranges pass access_ok().
      
      The problem is, invariants of data structures should be established in the
      primitives creating those data structures, not in the code using those
      primitives.  And iov_iter_init() violates that principle.  For a while we
      managed to get away with that, but once the use of iov_iter started to
      spread, it didn't take long for shit to hit the fan - missed check in
      sys_sendto() had introduced a roothole.
      
      We _do_ have primitives for importing and validating iovecs (both native and
      compat ones) and those primitives are almost always followed by shoving the
      resulting iovec into iov_iter.  Life would be considerably simpler (and safer)
      if we combined those primitives with initializing iov_iter.
      
      That gives us two new primitives - import_iovec() and compat_import_iovec().
      Calling conventions:
      	iovec = iov_array;
      	err = import_iovec(direction, uvec, nr_segs,
      			   ARRAY_SIZE(iov_array), &iovec,
      			   &iter);
      imports user vector into kernel space (into iov_array if it fits, allocated
      if it doesn't fit or if iovec was NULL), validates it and sets iter up to
      refer to it.  On success 0 is returned and allocated kernel copy (or NULL
      if the array had fit into caller-supplied one) is returned via iovec.
      On failure all allocations are undone and -E... is returned.  If the total
      size of ranges exceeds MAX_RW_COUNT, the excess is silently truncated.
      
      compat_import_iovec() expects uvec to be a pointer to user array of compat_iovec;
      otherwise it's identical to import_iovec().
      
      Finally, import_single_range() sets iov_iter backed by single-element iovec
      covering a user-supplied range -
      
      	err = import_single_range(direction, address, size, iovec, &iter);
      
      does validation and sets iter up.  Again, size in excess of MAX_RW_COUNT gets
      silently truncated.
      
      Next commits will be switching the things up to use of those and reducing
      the amount of iov_iter_init() instances.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
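      A minimal caller following the convention above (the function name is made
      up; note that the iovec pointer must refer to the caller-supplied array going
      in, and that kfree() of the result is always safe since it comes back NULL
      when the on-stack array was big enough):

          static int import_example(const struct iovec __user *uvec,
                                    unsigned int nr_segs, int direction)
          {
                  struct iovec iovstack[UIO_FASTIOV];
                  struct iovec *iov = iovstack;   /* caller-supplied fast array */
                  struct iov_iter iter;
                  ssize_t err;

                  err = import_iovec(direction, uvec, nr_segs,
                                     ARRAY_SIZE(iovstack), &iov, &iter);
                  if (err)
                          return err;             /* allocations already undone */

                  /* ... feed &iter to ->read_iter(), ->write_iter() or copy_*_iter() ... */

                  kfree(iov);                     /* NULL (no-op) if iovstack sufficed */
                  return 0;
          }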
  14. 18 Feb 2015, 2 commits
  15. 29 Jan 2015, 1 commit
  16. 09 Dec 2014, 4 commits
  17. 28 Nov 2014, 9 commits
  18. 14 Nov 2014, 1 commit
    • Fix thinko in iov_iter_single_seg_count · ad0eab92
      Authored by Paul Mackerras
      The branches of the if (i->type & ITER_BVEC) statement in
      iov_iter_single_seg_count() are the wrong way around; if ITER_BVEC is
      clear then we use i->bvec, when we should be using i->iov.  This fixes
      it.
      
      In my case, the symptom that this caused was that a KVM guest doing
      filesystem operations on a virtual disk would result in one of qemu's
      threads on the host going into an infinite loop in
      generic_perform_write().  The loop would hit the copied == 0 case and
      call iov_iter_single_seg_count() to reduce the number of bytes to try
      to process, but because of the error, iov_iter_single_seg_count()
      would just return i->count and the loop made no progress and continued
      forever.
      
      Cc: stable@vger.kernel.org # 3.16+
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
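      The corrected branch, sketched out (field names as in the iov_iter of that
      era): with ITER_BVEC set use i->bvec, with it clear use i->iov.

          size_t iov_iter_single_seg_count(const struct iov_iter *i)
          {
                  if (i->nr_segs == 1)
                          return i->count;
                  if (i->type & ITER_BVEC)
                          return min(i->count, i->bvec->bv_len - i->iov_offset);
                  else
                          return min(i->count, i->iov->iov_len - i->iov_offset);
          }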
  19. 09 Oct 2014, 1 commit
    • Add copy_to_iter(), copy_from_iter() and iov_iter_zero() · c35e0248
      Authored by Matthew Wilcox
      For DAX, we want to be able to copy between iovecs and kernel addresses
      that don't necessarily have a struct page.  This is a fairly simple
      rearrangement for bvec iters to kmap the pages outside and pass them in,
      but for user iovecs it gets more complicated because we might try various
      different ways to kmap the memory.  Duplicating the existing logic works
      out best in this case.
      
      We need to be able to write zeroes to an iovec for reads from unwritten
      ranges in a file.  This is performed by the new iov_iter_zero() function,
      again patterned after the existing code that handles iovec iterators.
      
      [AV: and export the buggers...]
      Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
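      An illustrative read-side use of the new helpers in the DAX spirit described
      above (the function name is made up): copy from a kernel virtual address with
      no struct page behind it, then zero-fill the unwritten tail.

          static size_t dax_style_read(void *kaddr, size_t mapped, size_t len,
                                       struct iov_iter *iter)
          {
                  size_t head = min(mapped, len);
                  size_t done = copy_to_iter(kaddr, head, iter);

                  /* only zero the unwritten tail if the copy completed */
                  if (done == head && len > head)
                          done += iov_iter_zero(len - head, iter);

                  return done;    /* may be short if the user buffer faulted */
          }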
  20. 27 Sep 2014, 1 commit
  21. 08 Aug 2014, 1 commit
  22. 07 May 2014, 5 commits