1. 18 April 2017 (2 commits)
  2. 23 February 2017 (1 commit)
  3. 14 January 2017 (1 commit)
    • locking/atomic, kref: Add KREF_INIT() · 1e24edca
      Peter Zijlstra authored
      Since we need to change the implementation, stop exposing internals.
      
      Provide KREF_INIT() to allow static initialization of struct kref.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1e24edca
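      A minimal sketch of the static initialization this enables, assuming a
      hypothetical driver-local structure (my_object is illustrative, not part
      of the patch):

        #include <linux/kref.h>

        struct my_object {
            struct kref ref;
            int id;
        };

        /* Statically initialized holding one reference, without poking at
         * the internals of struct kref directly. */
        static struct my_object default_obj = {
            .ref = KREF_INIT(1),
            .id  = 0,
        };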
  4. 18 October 2016 (1 commit)
  5. 01 October 2016 (4 commits)
  6. 22 September 2016 (1 commit)
  7. 31 July 2016 (1 commit)
  8. 30 June 2016 (2 commits)
    • fuse: improve aio directIO write performance for size extending writes · 7879c4e5
      Ashish Sangwan authored
      While sending blocking directIO in fuse, the write request is broken into
      sub-requests, each of default size 128k, and all of them are sent in
      non-blocking background mode if async_dio mode is supported by libfuse.
      The process that issues the write waits for the completion of all the
      sub-requests. Sending multiple requests in parallel gives the user space
      fuse implementation a chance to perform parallel writes if it is
      multi-threaded, and hence improves performance.
      
      When there is a size-extending aio dio write, we switch to blocking mode
      so that we can properly update the size of the file after the writes
      complete. However, in this situation all the sub-requests are sent
      serially: the next request is sent only after the reply to the current
      one has been received. Hence the multi-threaded user space implementation
      is not utilized properly.
      
      This patch changes the size-extending aio dio behavior to exactly follow
      blocking dio. For a multi-threaded fuse implementation with 10 threads,
      using a buffer size of 64MB to perform async directIO, we are getting
      double the speed.
      Signed-off-by: Ashish Sangwan <ashishsangwan2@gmail.com>
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      7879c4e5
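      A rough sketch of the split-and-wait scheme described above; struct
      io_batch, queue_subrequest() and wait_for_batch() are hypothetical
      helpers used only for illustration, not fuse internals:

        #include <linux/kernel.h>   /* min_t() */

        #define SUBREQ_SIZE (128 * 1024)    /* default sub-request size */

        static ssize_t submit_dio_write(struct io_batch *batch, const char *buf,
                                        size_t count, loff_t pos)
        {
            size_t done = 0;

            while (done < count) {
                size_t chunk = min_t(size_t, count - done, SUBREQ_SIZE);

                /* queued to the userspace server; returns immediately */
                queue_subrequest(batch, buf + done, chunk, pos + done);
                done += chunk;
            }

            /* one wait for the whole batch instead of a round trip per
             * sub-request */
            return wait_for_batch(batch);
        }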
    • fuse: serialize dirops by default · 5c672ab3
      Miklos Szeredi authored
      Negotiate with userspace filesystems whether they support parallel readdir
      and lookup.  Disable parallelism by default for fear of breaking fuse
      filesystems.
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      Fixes: 9902af79 ("parallel lookups: actual switch to rwsem")
      Fixes: d9b3dbdc ("fuse: switch to ->iterate_shared()")
      5c672ab3
  9. 14 March 2016 (1 commit)
    • fuse: Add reference counting for fuse_io_priv · 744742d6
      Seth Forshee authored
      The 'reqs' member of fuse_io_priv serves two purposes. The first is to
      track the number of outstanding async requests to the server and to
      signal that the io request is completed. The second is to be a reference
      count on the structure, to know when it can be freed.
      
      For sync io requests these purposes can be at odds.  fuse_direct_IO()
      wants to block until the request is done, and since the signal is sent
      when 'reqs' reaches 0 it cannot keep a reference to the object. Yet it
      needs to use the object after the userspace server has completed
      processing requests. This leads to some handshaking and special casing
      that is needlessly complicated and responsible for at least one race
      condition.
      
      It's much cleaner and safer to maintain a separate reference count for the
      object lifecycle and to let 'reqs' just be a count of outstanding requests
      to the userspace server. Then we can know for sure when it is safe to free
      the object without any handshaking or special cases.
      
      The catch here is that most of the time these objects are stack allocated
      and should not be freed. Initializing these objects with a single reference
      that is never released prevents accidental attempts to free the objects.
      
      Fixes: 9d5722b7 ("fuse: handle synchronous iocbs internally")
      Cc: stable@vger.kernel.org # v4.1+
      Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      744742d6
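      A minimal sketch of the scheme described above (illustrative, not the
      exact fuse code): a kref governs the object's lifetime while 'reqs' only
      counts outstanding requests, and a stack-allocated instance starts with
      one reference that is never dropped, so the release callback can never
      free it.

        #include <linux/kref.h>
        #include <linux/completion.h>
        #include <linux/slab.h>

        struct io_priv {
            struct kref refcnt;     /* lifetime of the structure */
            atomic_t reqs;          /* outstanding requests to userspace */
            struct completion done;
        };

        static void io_priv_release(struct kref *kref)
        {
            kfree(container_of(kref, struct io_priv, refcnt));
        }

        /* called once per completed async request */
        static void io_request_done(struct io_priv *io)
        {
            if (atomic_dec_and_test(&io->reqs))
                complete(&io->done);                    /* wake a sync waiter */
            kref_put(&io->refcnt, io_priv_release);     /* drop per-request ref */
        }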
  10. 10 November 2015 (1 commit)
  11. 01 July 2015 (12 commits)
  12. 14 March 2015 (1 commit)
  13. 06 January 2015 (1 commit)
  14. 12 December 2014 (4 commits)
    • fuse: introduce fuse_simple_request() helper · 7078187a
      Miklos Szeredi authored
      The following pattern is repeated many times:
      
      	req = fuse_get_req_nopages(fc);
      	/* Initialize req->(in|out).args */
      	fuse_request_send(fc, req);
      	err = req->out.h.error;
      	fuse_put_request(req);
      
      Create a new replacement helper:
      
      	/* Initialize args */
      	err = fuse_simple_request(fc, &args);
      
      In addition to reducing the code size, this will ease moving from the
      complex arg-based to a simpler page-based I/O on the fuse device.
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      7078187a
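      A sketch of what a converted caller might look like; the opcode, struct
      fuse_access_in, FUSE_ARGS() and the args.in field layout follow the fuse
      tree of that era and are shown here for illustration only:

        FUSE_ARGS(args);
        struct fuse_access_in inarg = { .mask = mask };
        int err;

        args.in.h.opcode = FUSE_ACCESS;
        args.in.h.nodeid = get_node_id(inode);
        args.in.numargs = 1;
        args.in.args[0].size = sizeof(inarg);
        args.in.args[0].value = &inarg;

        err = fuse_simple_request(fc, &args);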
    • fuse: reduce max out args · f704dcb5
      Miklos Szeredi authored
      The third out-arg is never actually used.
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      f704dcb5
    • fuse: hold inode instead of path after release · baebccbe
      Miklos Szeredi authored
      path_put() in release could trigger a DESTROY request in fuseblk.  The
      possible deadlock was worked around by doing the path_put() with
      schedule_work().
      
      This complexity isn't needed if we just hold the inode instead of the path.
      Since we now flush all requests before destroying the super block we can be
      sure that all held inodes will be dropped.
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      baebccbe
    • fuse: flush requests on umount · 580640ba
      Miklos Szeredi authored
      Use fuse_abort_conn() instead of fuse_conn_kill() in fuse_put_super().
      This flushes and aborts requests still on any queues.  But since we've
      already reset fc->connected, those requests would not be useful anyway and
      would be flushed when the fuse device is closed.
      
      Next patches will rely on requests being flushed before the superblock is
      destroyed.
      
      Use fuse_abort_conn() in cuse_process_init_reply() too, since it makes no
      difference there, and we can get rid of fuse_conn_kill().
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      580640ba
  15. 07 May 2014 (1 commit)
  16. 28 April 2014 (4 commits)
  17. 02 April 2014 (2 commits)
    • fuse: Fix O_DIRECT operations vs cached writeback misorder · ea8cd333
      Pavel Emelyanov authored
      The problem is:
      
      1. write cached data to a file
      2. read directly from the same file (via another fd)
      
      The 2nd operation may read stale data, i.e. the data that was in the file
      before the 1st op. The problem is in how fuse manages writeback.
      
      When a direct op occurs, the core kernel code calls filemap_write_and_wait
      to flush all the cached ops in flight. But fuse acks the writeback right
      after the ->writepages callback exits, without waiting for the real write
      to happen. Thus the subsequent direct op proceeds while the real writeback
      is still in flight. This is a problem for backends that reorder operations.
      
      Fix this by making the fuse direct IO callback explicitly wait on the
      in-flight writeback to finish.
      Signed-off-by: Maxim Patlasov <MPatlasov@parallels.com>
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      ea8cd333
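      A sketch of the ordering the fix enforces (the fuse-side helper name
      below is hypothetical, not necessarily what the patch uses): before a
      direct op goes to the server, start and wait for page-cache writeback as
      usual, then additionally wait until fuse's own in-flight write requests
      have been acked by the userspace server.

        static int dio_wait_for_cached_writes(struct inode *inode,
                                              loff_t pos, size_t len)
        {
            int err;

            /* core kernel helper: waits only until ->writepages returns,
             * which for fuse is before the server has seen the data */
            err = filemap_write_and_wait_range(inode->i_mapping, pos,
                                               pos + len - 1);
            if (err)
                return err;

            /* hypothetical helper: wait for fuse's queued and in-flight
             * write requests to be answered by the server */
            wait_for_fuse_writeback(inode);
            return 0;
        }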
    • fuse: Trust kernel i_mtime only · b0aa7606
      Maxim Patlasov authored
      Let the kernel maintain i_mtime locally:
       - clear S_NOCMTIME
       - implement i_op->update_time()
       - flush mtime on fsync and last close
       - update i_mtime explicitly on truncate and fallocate
      
      The fuse inode flag FUSE_I_MTIME_DIRTY serves as an indication that the
      local i_mtime should be flushed to the server eventually.
      Signed-off-by: Maxim Patlasov <MPatlasov@parallels.com>
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      b0aa7606
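      Roughly what the new ->update_time() hook looks like, as a sketch based
      on the description above (the exact body may differ from the patch;
      FUSE_I_MTIME_DIRTY is the flag named in the message):

        static int fuse_update_time(struct inode *inode, struct timespec *now,
                                    int flags)
        {
            if (flags & S_MTIME) {
                inode->i_mtime = *now;
                /* remember that the server's mtime is now stale */
                set_bit(FUSE_I_MTIME_DIRTY,
                        &get_fuse_inode(inode)->state);
            }
            return 0;
        }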