1. 12 Feb 2018, 1 commit
    • vfs: do bulk POLL* -> EPOLL* replacement · a9a08845
      Linus Torvalds authored
      This is the mindless scripted replacement of kernel use of POLL*
      variables as described by Al, done by this script:
      
          for V in IN OUT PRI ERR RDNORM RDBAND WRNORM WRBAND HUP RDHUP NVAL MSG; do
              L=`git grep -l -w POLL$V | grep -v '^t' | grep -v /um/ | grep -v '^sa' | grep -v '/poll.h$'|grep -v '^D'`
              for f in $L; do sed -i "-es/^\([^\"]*\)\(\<POLL$V\>\)/\\1E\\2/" $f; done
          done
      
      with de-mangling cleanups yet to come.
      
      NOTE! On almost all architectures, the EPOLL* constants have the same
      values as the POLL* constants do.  But the keyword here is "almost".
      For various bad reasons they aren't the same, and epoll() doesn't
      actually work quite correctly in some cases due to this on Sparc et al.
      
      The next patch from Al will sort out the final differences, and we
      should be all done.
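      
      For illustration, a minimal sketch of what the rename amounts to in a
      driver's poll method (the wait queue and readiness variables are made-up
      names, not from this commit):
      
          static __poll_t example_poll(struct file *file, poll_table *wait)
          {
                  __poll_t mask = 0;
      
                  /* example_wq, data_ready and space_free are hypothetical */
                  poll_wait(file, &example_wq, wait);
                  if (data_ready)
                          mask |= EPOLLIN | EPOLLRDNORM;   /* was POLLIN | POLLRDNORM */
                  if (space_free)
                          mask |= EPOLLOUT | EPOLLWRNORM;  /* was POLLOUT | POLLWRNORM */
                  return mask;
          }
      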
      Scripted-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a9a08845
  2. 30 Nov 2017, 1 commit
  3. 28 Nov 2017, 1 commit
  4. 12 Sep 2017, 3 commits
    • fuse: getattr cleanup · 5b97eeac
      Miklos Szeredi authored
      The refreshed argument isn't used by any caller; get rid of it.
      
      Use a helper for just updating the inode (no need to fill in a kstat).
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      5b97eeac
    • fuse: honor iocb sync flags on write · e1c0eecb
      Miklos Szeredi authored
      If the IOCB_DSYNC flag is set, a sync is not being performed by
      fuse_file_write_iter().
      
      Honor IOCB_DSYNC/IOCB_SYNC by setting O_DSYNC/O_SYNC respectively in the
      flags field of the write request.
      
      We don't need to sync data or metadata, since fuse_perform_write() does
      write-through and the filesystem is responsible for updating file times.
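      
      A minimal sketch of the idea, assuming a small helper (the name is
      illustrative, not necessarily the exact upstream code):
      
          static unsigned int example_write_flags(struct kiocb *iocb)
          {
                  unsigned int flags = iocb->ki_filp->f_flags;
      
                  if (iocb->ki_flags & IOCB_DSYNC)
                          flags |= O_DSYNC;
                  if (iocb->ki_flags & IOCB_SYNC)
                          flags |= O_SYNC;
                  return flags;
          }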
      
      Original patch by Vitaly Zolotusky.
      Reported-by: Nate Clark <nate@neworld.us>
      Cc: Vitaly Zolotusky <vitaly@unitc.com>
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      e1c0eecb
    • fuse: allow server to run in different pid_ns · 5d6d3a30
      Miklos Szeredi authored
      Commit 0b6e9ea0 ("fuse: Add support for pid namespaces") broke
      Sandstorm.io development tools, which have been sending FUSE file
      descriptors across PID namespace boundaries since early 2014.
      
      The above patch added a check that prevented I/O on the fuse device file
      descriptor if the pid namespace of the reader/writer was different from the
      pid namespace of the mounter.  With this change passing the device file
      descriptor to a different pid namespace simply doesn't work.  The check was
      added because pids are transferred to/from the fuse userspace server in the
      namespace registered at mount time.
      
      To fix this regression, remove the checks and do the following:
      
      1) the pid in the request header (the pid of the task that initiated the
      filesystem operation) is translated to the reader's pid namespace.  If a
      mapping doesn't exist for this pid, then a zero pid is used (a minimal
      translation sketch follows this list).  Note: even if a mapping would
      exist between the initiator task's pid namespace and the reader's pid
      namespace, the pid will be zero if either the mapping from the initiator's
      to the mounter's namespace or the mapping from the mounter's to the
      reader's namespace doesn't exist.
      
      2) The lk.pid value in setlk/setlkw requests and getlk reply is left alone.
      Userspace should not interpret this value anyway.  Also allow the
      setlk/setlkw operations if the pid of the task cannot be represented in the
      mounter's namespace (pid being zero in that case).
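      
      A minimal sketch of the translation in step 1, assuming a helper of this
      shape (the helper name and the exact call site are illustrative):
      
          static pid_t example_translate_pid(pid_t nr, struct pid_namespace *mount_ns,
                                             struct pid_namespace *reader_ns)
          {
                  struct pid *pid;
                  pid_t ret = 0;
      
                  rcu_read_lock();
                  pid = find_pid_ns(nr, mount_ns);         /* pid as known at mount time */
                  if (pid)
                          ret = pid_nr_ns(pid, reader_ns); /* 0 if not visible to the reader */
                  rcu_read_unlock();
                  return ret;
          }
      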
      Reported-by: Kenton Varda <kenton@sandstorm.io>
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      Fixes: 0b6e9ea0 ("fuse: Add support for pid namespaces")
      Cc: <stable@vger.kernel.org> # v4.12+
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Seth Forshee <seth.forshee@canonical.com>
      5d6d3a30
  5. 11 Aug 2017, 1 commit
  6. 03 Aug 2017, 1 commit
    • fuse: Dont call set_page_dirty_lock() for ITER_BVEC pages for async_dio · 61c12b49
      Ashish Samant authored
      Commit 8fba54ae ("fuse: direct-io: don't dirty ITER_BVEC pages") fixes
      the ITER_BVEC page deadlock for direct io in fuse by checking in
      fuse_direct_io(), whether the page is a bvec page or not, before locking
      it.  However, this check is missed when the "async_dio" mount option is
      enabled.  In this case, set_page_dirty_lock() is called from the req->end
      callback in request_end(), when the fuse thread is returning from userspace
      to respond to the read request.  This will cause the same deadlock because
      the bvec condition is not checked in this path.
      
      Here is the stack of the deadlocked thread, while returning from userspace:
      
      [13706.656686] INFO: task glusterfs:3006 blocked for more than 120 seconds.
      [13706.657808] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [13706.658788] glusterfs       D ffffffff816c80f0     0  3006      1 0x00000080
      [13706.658797]  ffff8800d6713a58 0000000000000086 ffff8800d9ad7000 ffff8800d9ad5400
      [13706.658799]  ffff88011ffd5cc0 ffff8800d6710008 ffff88011fd176c0 7fffffffffffffff
      [13706.658801]  0000000000000002 ffffffff816c80f0 ffff8800d6713a78 ffffffff816c790e
      [13706.658803] Call Trace:
      [13706.658809]  [<ffffffff816c80f0>] ? bit_wait_io_timeout+0x80/0x80
      [13706.658811]  [<ffffffff816c790e>] schedule+0x3e/0x90
      [13706.658813]  [<ffffffff816ca7e5>] schedule_timeout+0x1b5/0x210
      [13706.658816]  [<ffffffff81073ffb>] ? gup_pud_range+0x1db/0x1f0
      [13706.658817]  [<ffffffff810668fe>] ? kvm_clock_read+0x1e/0x20
      [13706.658819]  [<ffffffff81066909>] ? kvm_clock_get_cycles+0x9/0x10
      [13706.658822]  [<ffffffff810f5792>] ? ktime_get+0x52/0xc0
      [13706.658824]  [<ffffffff816c6f04>] io_schedule_timeout+0xa4/0x110
      [13706.658826]  [<ffffffff816c8126>] bit_wait_io+0x36/0x50
      [13706.658828]  [<ffffffff816c7d06>] __wait_on_bit_lock+0x76/0xb0
      [13706.658831]  [<ffffffffa0545636>] ? lock_request+0x46/0x70 [fuse]
      [13706.658834]  [<ffffffff8118800a>] __lock_page+0xaa/0xb0
      [13706.658836]  [<ffffffff810c8500>] ? wake_atomic_t_function+0x40/0x40
      [13706.658838]  [<ffffffff81194d08>] set_page_dirty_lock+0x58/0x60
      [13706.658841]  [<ffffffffa054d968>] fuse_release_user_pages+0x58/0x70 [fuse]
      [13706.658844]  [<ffffffffa0551430>] ? fuse_aio_complete+0x190/0x190 [fuse]
      [13706.658847]  [<ffffffffa0551459>] fuse_aio_complete_req+0x29/0x90 [fuse]
      [13706.658849]  [<ffffffffa05471e9>] request_end+0xd9/0x190 [fuse]
      [13706.658852]  [<ffffffffa0549126>] fuse_dev_do_write+0x336/0x490 [fuse]
      [13706.658854]  [<ffffffffa054963e>] fuse_dev_write+0x6e/0xa0 [fuse]
      [13706.658857]  [<ffffffff812a9ef3>] ? security_file_permission+0x23/0x90
      [13706.658859]  [<ffffffff81205300>] do_iter_readv_writev+0x60/0x90
      [13706.658862]  [<ffffffffa05495d0>] ? fuse_dev_splice_write+0x350/0x350 [fuse]
      [13706.658863]  [<ffffffff812062a1>] do_readv_writev+0x171/0x1f0
      [13706.658866]  [<ffffffff810b3d00>] ? try_to_wake_up+0x210/0x210
      [13706.658868]  [<ffffffff81206361>] vfs_writev+0x41/0x50
      [13706.658870]  [<ffffffff81206496>] SyS_writev+0x56/0xf0
      [13706.658872]  [<ffffffff810257a1>] ? syscall_trace_leave+0xf1/0x160
      [13706.658874]  [<ffffffff816cbb2e>] system_call_fastpath+0x12/0x71
      
      Fix this by making should_dirty a fuse_io_priv parameter that can be
      checked in fuse_aio_complete_req().
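      
      A hedged sketch of the shape of the fix; these are fragments, with the
      surrounding fuse code elided:
      
          /* Submission path (fuse_direct_io): record whether the pages may be
           * dirtied; ITER_BVEC pages must not be, which is what deadlocks above. */
          io->should_dirty = !write && iter_is_iovec(iter);
      
          /* Async completion path (fuse_aio_complete_req): consult the stored flag
           * instead of unconditionally calling set_page_dirty_lock(). */
          fuse_release_user_pages(req, io->should_dirty);
      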
      Reported-by: Tiger Yang <tiger.yang@oracle.com>
      Signed-off-by: Ashish Samant <ashish.samant@oracle.com>
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      61c12b49
  7. 01 Aug 2017, 1 commit
  8. 16 Jul 2017, 1 commit
    • fs/locks: Remove fl_nspid and use fs-specific l_pid for remote locks · 9d5b86ac
      Benjamin Coddington authored
      Since commit c69899a1 "NFSv4: Update of VFS byte range lock must be
      atomic with the stateid update", NFSv4 has been inserting locks in rpciod
      worker context.  The result is that the file_lock's fl_nspid is the
      kworker's pid instead of the original userspace pid.
      
      The fl_nspid is only used to represent the namespaced virtual pid number
      when displaying locks or returning from F_GETLK.  There's no reason to set
      it for every inserted lock, since we can usually just look it up from
      fl_pid.  So, instead of looking up and holding struct pid for every lock,
      let's just look up the virtual pid number from fl_pid when it is needed.
      That means we can remove fl_nspid entirely.
      
      The translation and presentation of fl_pid should handle the following four
      cases:
      
      1 - F_GETLK on a remote file with a remote lock:
          In this case, the filesystem should determine the l_pid to return here.
          Filesystems should indicate that the fl_pid represents a non-local pid
          value that should not be translated by returning an fl_pid <= 0.
      
      2 - F_GETLK on a local file with a remote lock:
          This should be the l_pid of the lock manager process, and translated.
      
      3 - F_GETLK on a remote file with a local lock, and
      4 - F_GETLK on a local file with a local lock:
          These should be the translated l_pid of the local locking process.
      
      Fuse was already doing the correct thing by translating the pid into the
      caller's namespace.  With this change we must update fuse to translate
      to init's pid namespace, so that the locks API can then translate from
      init's pid namespace into the pid namespace of the caller.
      
      With this change, the locks API will expect that if a filesystem returns
      a remote pid as opposed to a local pid for F_GETLK, that remote pid will
      be <= 0.  This signifies that the pid is remote, and the locks API will
      forego translating that pid into the pid namespace of the local calling
      process.
      
      Finally, we convert remote filesystems to present remote pids using
      negative numbers. Have lustre, 9p, ceph, cifs, and dlm negate the remote
      pid returned for F_GETLK lock requests.
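      
      As a hedged fragment (variable names are illustrative), the convention
      amounts to something like this when a filesystem fills in the conflicting
      lock for F_GETLK:
      
          if (lock_is_remote)
                  fl->fl_pid = -(pid_t)owner_pid;  /* remote owner: report as <= 0, untranslated */
          else
                  fl->fl_pid = local_owner_pid;    /* local owner: translated by the locks API */
      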
      
      Since local pids will never be larger than PID_MAX_LIMIT (which is
      currently defined as <= 4 million), while pid_t is a signed int, we
      should have plenty of room to represent remote pids with negative
      numbers if we assume that remote pid numbers are similarly limited.
      
      If this is not the case, then we run the risk of having a remote pid
      returned for which there is also a corresponding local pid.  This is a
      problem we have now, but this patch should reduce the chances of that
      occurring, while also returning those remote pid numbers, for whatever
      that may be worth.
      Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      9d5b86ac
  9. 09 Jun 2017, 1 commit
  10. 21 Apr 2017, 1 commit
  11. 18 Apr 2017, 2 commits
  12. 25 Feb 2017, 1 commit
  13. 23 Feb 2017, 3 commits
  14. 15 Nov 2016, 1 commit
  15. 01 Oct 2016, 2 commits
  16. 22 Sep 2016, 1 commit
  17. 25 Aug 2016, 1 commit
  18. 29 Jul 2016, 4 commits
  19. 30 Jun 2016, 1 commit
    • fuse: improve aio directIO write performance for size extending writes · 7879c4e5
      Ashish Sangwan authored
      While sending the blocking directIO in fuse, the write request is broken
      into sub-requests, each of default size 128k and all the requests are sent
      in non-blocking background mode if async_dio mode is supported by libfuse.
      The process which issue the write wait for the completion of all the
      sub-requests. Sending multiple requests parallely gives a chance to perform
      parallel writes in the user space fuse implementation if it is
      multi-threaded and hence improves the performance.
      
      When there is a size extending aio dio write, we switch to blocking mode so
      that we can properly update the size of the file after completion of the
      writes.  However, in this situation all the sub-requests are sent in a
      serialized manner, where the next request is sent only after receiving the
      reply of the current request. Hence the multi-threaded user space
      implementation is not utilized properly.
      
      This patch changes the size extending aio dio behavior to exactly follow
      blocking dio.  For a multi-threaded fuse implementation having 10 threads
      and using a buffer size of 64MB to perform async directIO, we are getting
      double the speed.
      Signed-off-by: Ashish Sangwan <ashishsangwan2@gmail.com>
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      7879c4e5
  20. 02 May 2016, 2 commits
  21. 25 Apr 2016, 1 commit
  22. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Kirill A. Shutemov authored
      PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
      ago with promise that one day it will be possible to implement page
      cache with bigger chunks than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE.  And it's a constant source of confusion whether the
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straight-forward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      script below.  For some reason, coccinelle doesn't patch header files.
      I've called spatch for them manually.
      
      The only adjustment after coccinelle is reverting the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
  23. 16 Mar 2016, 1 commit
  24. 14 Mar 2016, 2 commits
    • fuse: Add reference counting for fuse_io_priv · 744742d6
      Seth Forshee authored
      The 'reqs' member of fuse_io_priv serves two purposes.  The first is to
      track the number of outstanding async requests to the server and to signal
      that the io request is completed.  The second is to be a reference count on
      the structure, to know when it can be freed.
      
      For sync io requests these purposes can be at odds.  fuse_direct_IO() wants
      to block until the request is done, and since the signal is sent when
      'reqs' reaches 0 it cannot keep a reference to the object. Yet it needs to
      use the object after the userspace server has completed processing
      requests.  This leads to some handshaking and special casing that is
      needlessly complicated and responsible for at least one race condition.
      
      It's much cleaner and safer to maintain a separate reference count for the
      object lifecycle and to let 'reqs' just be a count of outstanding requests
      to the userspace server. Then we can know for sure when it is safe to free
      the object without any handshaking or special cases.
      
      The catch here is that most of the time these objects are stack allocated
      and should not be freed. Initializing these objects with a single reference
      that is never released prevents accidental attempts to free the objects.
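      
      A minimal sketch of the scheme, with illustrative names:
      
          #include <linux/kref.h>
      
          struct example_io_priv {
                  struct kref refcnt;     /* object lifetime: freed when this drops to 0 */
                  unsigned int reqs;      /* outstanding requests to the userspace server */
          };
      
          /* Stack-allocated instances start with one reference that is never put,
           * so a stray "last put" can never try to free the stack object. */
          static void example_io_init_on_stack(struct example_io_priv *io)
          {
                  kref_init(&io->refcnt); /* the reference held by the stack frame */
                  io->reqs = 0;
          }
      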
      
      Fixes: 9d5722b7 ("fuse: handle synchronous iocbs internally")
      Cc: stable@vger.kernel.org # v4.1+
      Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      744742d6
    • fuse: do not use iocb after it may have been freed · 7cabc61e
      Robert Doebbelin authored
      There's a race in fuse_direct_IO(), whereby is_sync_kiocb() is called on an
      iocb that could have been freed if async io has already completed.  The fix
      in this case is simple and obvious: cache the result before starting io.
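      
      A minimal sketch of the fix's shape (surrounding code elided):
      
          /* Evaluate is_sync_kiocb() once, before submitting anything: once async
           * completion can run, the iocb may already have been freed. */
          bool is_sync = is_sync_kiocb(iocb);
      
          /* ... submit the I/O ...; from here on test 'is_sync', never touch 'iocb' */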
      
      It was discovered by KASan:
      
      kernel: ==================================================================
      kernel: BUG: KASan: use after free in fuse_direct_IO+0xb1a/0xcc0 at addr ffff88036c414390
      Signed-off-by: Robert Doebbelin <robert@quobyte.com>
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      Fixes: bcba24cc ("fuse: enable asynchronous processing direct IO")
      Cc: <stable@vger.kernel.org> # 3.10+
      7cabc61e
  25. 23 Jan 2016, 1 commit
    • wrappers for ->i_mutex access · 5955102c
      Al Viro authored
      parallel to mutex_{lock,unlock,trylock,is_locked,lock_nested},
      inode_foo(inode) being mutex_foo(&inode->i_mutex).
      
      Please, use those for access to ->i_mutex; over the coming cycle
      ->i_mutex will become rwsem, with ->lookup() done with it held
      only shared.
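      
      For example, a caller that used to take the mutex directly now reads:
      
          /* before: mutex_lock(&inode->i_mutex); ... mutex_unlock(&inode->i_mutex); */
          inode_lock(inode);
          /* ... operate on the inode with ->i_mutex held ... */
          inode_unlock(inode);
      
          /* the other wrappers follow the same naming, e.g.: */
          if (inode_trylock(inode)) {
                  WARN_ON(!inode_is_locked(inode));
                  inode_unlock(inode);
          }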
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      5955102c
  26. 10 Nov 2015, 2 commits
    • fuse: add support for SEEK_HOLE and SEEK_DATA in lseek · 0b5da8db
      Ravishankar N authored
      A useful performance improvement for accessing virtual machine images
      via FUSE mount.
      
      See https://bugzilla.redhat.com/show_bug.cgi?id=1220173 for a use-case
      for glusterFS.
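      
      A hedged userspace illustration of the feature: walking the data regions of
      a sparse image on a FUSE mount (error handling elided; the function name is
      made up):
      
          #define _GNU_SOURCE
          #include <sys/types.h>
          #include <unistd.h>
      
          static void scan_data_regions(int fd)
          {
                  off_t pos = 0, hole;
      
                  /* stops when SEEK_DATA past EOF fails with ENXIO */
                  while ((pos = lseek(fd, pos, SEEK_DATA)) >= 0) {
                          hole = lseek(fd, pos, SEEK_HOLE); /* end of this data region */
                          /* ... process bytes in [pos, hole) ... */
                          pos = hole;
                  }
          }
      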
      Signed-off-by: Ravishankar N <ravishankar@redhat.com>
      Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
      0b5da8db
    • fuse: break infinite loop in fuse_fill_write_pages() · 3ca8138f
      Roman Gushchin authored
      I got a report about an unkillable task eating CPU.  Further
      investigation showed that the problem is in the fuse_fill_write_pages()
      function.  If the iov's first segment has zero length, we get an infinite
      loop, because we never reach the iov_iter_advance() call.
      
      Fix this by calling iov_iter_advance() before repeating an attempt to
      copy data from userspace.
      
      A similar problem is described in 124d3b70 ("fix writev regression:
      pan hanging unkillable and un-straceable").  If a zero-length segment
      is followed by a segment with an invalid address,
      iov_iter_fault_in_readable() checks only the first segment (zero-length),
      iov_iter_copy_from_user_atomic() skips it, fails at the second and
      returns zero -> goto again without skipping the zero-length segment.
      
      The patch calls iov_iter_advance() before the goto again: we'll skip the
      zero-length segment at the second iteration and iov_iter_fault_in_readable()
      will detect the invalid address.
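      
      A hedged sketch of the loop's shape after the fix (page locking and error
      paths elided):
      
          again:
                  /* returns 0 if nothing could be copied, e.g. for a zero-length segment */
                  tmp = iov_iter_copy_from_user_atomic(page, ii, offset, bytes);
                  flush_dcache_page(page);
      
                  /* advance BEFORE retrying, so a zero-length first segment is consumed
                   * and the next pass starts at the following segment */
                  iov_iter_advance(ii, tmp);
      
                  if (!tmp) {
                          bytes = min(bytes, iov_iter_single_seg_count(ii));
                          goto again;
                  }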
      
      Special thanks to Konstantin Khlebnikov, who helped a lot with the commit
      description.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Maxim Patlasov <mpatlasov@parallels.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Signed-off-by: Roman Gushchin <klamm@yandex-team.ru>
      Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
      Fixes: ea9b9907 ("fuse: implement perform_write")
      Cc: <stable@vger.kernel.org>
      3ca8138f
  27. 23 Oct 2015, 1 commit
  28. 01 Jul 2015, 1 commit
    • fuse: req state use flags · 33e14b4d
      Miklos Szeredi authored
      Use flags for representing the state in fuse_req.  This is needed since
      req->list will be protected by different locks in different states, hence
      we'll want the state itself to be split into distinct bits, each protected
      with the relevant lock in that state.
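      
      A hedged sketch of the direction (bit names are illustrative, not
      necessarily the real ones):
      
          /* one bit per state, so each bit can be owned by the lock relevant to
           * that state */
          enum example_req_flag {
                  EX_FR_PENDING,          /* queued, not yet read by the server */
                  EX_FR_SENT,             /* picked up by the userspace server */
                  EX_FR_FINISHED,         /* reply received */
          };
      
          /* usage, e.g.:
           *      set_bit(EX_FR_SENT, &req->flags);
           *      if (test_bit(EX_FR_FINISHED, &req->flags)) ...
           */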
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      33e14b4d