1. 26 Jun 2020, 1 commit
    • NFSv4 fix CLOSE not waiting for direct IO completion · d03727b2
      Authored by Olga Kornievskaia
      Figuring out the root cause of the REMOVE/CLOSE race and
      suggesting the solution was done by Neil Brown.
      
      Currently what happens is that direct IO calls hold a reference
      on the open context, which is decremented as an asynchronous task
      in nfs_direct_complete(). Before the reference is decremented,
      control is returned to the application, which is free to close the
      file. When the close is processed, it decrements its reference
      on the open context, but since direct IO still holds one, it doesn't
      send a CLOSE on the wire. Control is then returned to the application,
      which is free to perform other operations. For instance, it can delete
      the file. Direct IO finally releases its reference and triggers
      an asynchronous CLOSE, which races with the REMOVE. On the server,
      the REMOVE can be processed before the CLOSE, failing the REMOVE with
      EACCES because the file is still open.
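
      A minimal user-space sketch of the sequence described above (mount
      point, file name, and sizes are hypothetical; it only illustrates that
      close() and unlink() can both return before the deferred CLOSE is sent
      on the wire):

      /* Illustrative sketch, not part of the patch. Assumes /mnt/nfs is an
       * NFSv4 mount; the path and sizes are made up. */
      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <stdlib.h>
      #include <string.h>
      #include <unistd.h>

      int main(void)
      {
              void *buf;
              int fd = open("/mnt/nfs/testfile", O_CREAT | O_WRONLY | O_DIRECT, 0644);

              if (fd < 0 || posix_memalign(&buf, 4096, 4096))
                      return 1;
              memset(buf, 0, 4096);
              /* Direct write: its completion runs asynchronously and keeps a
               * reference on the open context for a while. */
              write(fd, buf, 4096);
              /* close() can return without sending CLOSE because direct IO
               * still holds a reference on the open context. */
              close(fd);
              /* The REMOVE for this unlink may reach the server before the
               * asynchronous CLOSE, which is the race being fixed. */
              unlink("/mnt/nfs/testfile");
              free(buf);
              return 0;
      }
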
      Signed-off-by: Olga Kornievskaia <kolga@netapp.com>
      Suggested-by: Neil Brown <neilb@suse.com>
      CC: stable@vger.kernel.org
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
  2. 12 Jun 2020, 2 commits
    • NFS: Fix direct WRITE throughput regression · ba838a75
      Authored by Chuck Lever
      I measured a 50% throughput regression for large direct writes.
      
      The observed on-the-wire behavior is that the client sends every
      NFS WRITE twice: once as an UNSTABLE WRITE plus a COMMIT, and once
      as a FILE_SYNC WRITE.
      
      This is because the nfs_write_match_verf() check in
      nfs_direct_commit_complete() fails for every WRITE.
      
      Buffered writes use nfs_write_completion(), which sets req->wb_verf
      correctly. Direct writes use nfs_direct_write_completion(), which
      does not set req->wb_verf at all. This leaves req->wb_verf set to
      all zeroes for every direct WRITE, and thus
      nfs_direct_commit_complete() always sets NFS_ODIRECT_RESCHED_WRITES.
      
      This fix appears to restore nearly all of the lost performance.
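
      A stand-alone model of the check described above (the real structures
      are struct nfs_page and the NFS write verifier; the names below are
      made up for illustration): if the direct write path never copies the
      server's verifier into the request, the later comparison is against
      all zeroes and always fails, so the write is rescheduled.

      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>

      struct verf { unsigned char data[8]; };

      /* Models the nfs_write_match_verf() idea: a COMMIT is accepted only
       * when the verifier remembered in the request equals the one the
       * server returned. */
      static bool write_match_verf(const struct verf *server, const struct verf *req)
      {
              return memcmp(server->data, req->data, sizeof(server->data)) == 0;
      }

      int main(void)
      {
              struct verf server = { .data = { 0xde, 0xad, 0xbe, 0xef, 1, 2, 3, 4 } };
              struct verf unset  = { .data = { 0 } };   /* direct path before the fix */
              struct verf copied = server;              /* verifier copied at completion */

              printf("zeroed wb_verf matches: %s\n",
                     write_match_verf(&server, &unset) ? "yes" : "no (WRITE resent as FILE_SYNC)");
              printf("copied wb_verf matches: %s\n",
                     write_match_verf(&server, &copied) ? "yes (UNSTABLE WRITE + COMMIT suffices)" : "no");
              return 0;
      }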
      
      Fixes: 1f28476d ("NFS: Fix O_DIRECT commit verifier handling")
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • NFS: remove redundant initialization of variable result · 86b93667
      Authored by Colin Ian King
      The variable result is initialized with a value that is never read
      and is later updated with a new value. The initialization is
      redundant and can be removed.
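
      A hypothetical example of the pattern Coverity flags (not the actual
      NFS function; perform_read() is a stand-in for the real work):

      #include <stdio.h>
      #include <sys/types.h>

      static ssize_t perform_read(void) { return 42; }

      static ssize_t do_read(void)
      {
              ssize_t result = 0;        /* dead store: never read ... */

              result = perform_read();   /* ... because it is overwritten here */
              return result;
      }

      int main(void)
      {
              /* After the cleanup, result is declared without the redundant
               * initializer (or initialized directly from the call). */
              printf("%zd\n", do_read());
              return 0;
      }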
      
      Addresses-Coverity: ("Unused value")
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
  3. 02 Apr 2020, 2 commits
  4. 28 Mar 2020, 6 commits
  5. 23 Mar 2020, 1 commit
  6. 15 Jan 2020, 2 commits
  7. 09 Oct 2019, 2 commits
  8. 19 Aug 2019, 1 commit
  9. 21 May 2019, 1 commit
  10. 26 Apr 2019, 2 commits
  11. 21 Feb 2019, 2 commits
  12. 02 Dec 2018, 1 commit
    • nfs: don't dirty kernel pages read by direct-io · ad3cba22
      Authored by Dave Kleikamp
      When we use direct_IO with an NFS backing store, we can trigger a
      WARNING in __set_page_dirty(), as below, since we're dirtying the page
      unnecessarily in nfs_direct_read_completion().
      
      To fix, replicate the logic in commit 53cbf3b1 ("fs: direct-io:
      don't dirtying pages for ITER_BVEC/ITER_KVEC direct read").
      
      Other filesystems that implement direct_IO handle this; most use
      blockdev_direct_IO(). ceph and cifs have similar logic.
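
      A sketch of the approach (flag name and placement are paraphrased from
      the mainline fix in fs/nfs/direct.c, not the literal patch): record at
      submission time whether the iterator is user-backed, and only dirty
      pages on read completion in that case.

      /* At submission (nfs_file_direct_read): ITER_BVEC/ITER_KVEC pages are
       * kernel pages and must not be dirtied by the completion path. */
      if (iter_is_iovec(iter))
              dreq->flags = NFS_ODIRECT_SHOULD_DIRTY;

      /* At completion (nfs_direct_read_completion): dirty the page only for
       * user-backed reads. */
      if (dreq->flags & NFS_ODIRECT_SHOULD_DIRTY)
              set_page_dirty(page);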
      
      mount 127.0.0.1:/export /nfs
      dd if=/dev/zero of=/nfs/image bs=1M count=200
      losetup --direct-io=on -f /nfs/image
      mkfs.btrfs /dev/loop0
      mount -t btrfs /dev/loop0 /mnt/
      
      kernel: WARNING: CPU: 0 PID: 8067 at fs/buffer.c:580 __set_page_dirty+0xaf/0xd0
      kernel: Modules linked in: loop(E) nfsv3(E) rpcsec_gss_krb5(E) nfsv4(E) dns_resolver(E) nfs(E) fscache(E) nfsd(E) auth_rpcgss(E) nfs_acl(E) lockd(E) grace(E) fuse(E) tun(E) ip6t_rpfilter(E) ipt_REJECT(E) nf_
      kernel:  snd_seq(E) snd_seq_device(E) snd_pcm(E) video(E) snd_timer(E) snd(E) soundcore(E) ip_tables(E) xfs(E) libcrc32c(E) sd_mod(E) sr_mod(E) cdrom(E) ata_generic(E) pata_acpi(E) crc32c_intel(E) ahci(E) li
      kernel: CPU: 0 PID: 8067 Comm: kworker/0:2 Tainted: G            E     4.20.0-rc1.master.20181111.ol7.x86_64 #1
      kernel: Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
      kernel: Workqueue: nfsiod rpc_async_release [sunrpc]
      kernel: RIP: 0010:__set_page_dirty+0xaf/0xd0
      kernel: Code: c3 48 8b 02 f6 c4 04 74 d4 48 89 df e8 ba 05 f7 ff 48 89 c6 eb cb 48 8b 43 08 a8 01 75 1f 48 89 d8 48 8b 00 a8 04 74 02 eb 87 <0f> 0b eb 83 48 83 e8 01 eb 9f 48 83 ea 01 0f 1f 00 eb 8b 48 83 e8
      kernel: RSP: 0000:ffffc1c8825b7d78 EFLAGS: 00013046
      kernel: RAX: 000fffffc0020089 RBX: fffff2b603308b80 RCX: 0000000000000001
      kernel: RDX: 0000000000000001 RSI: ffff9d11478115c8 RDI: ffff9d11478115d0
      kernel: RBP: ffffc1c8825b7da0 R08: 0000646f6973666e R09: 8080808080808080
      kernel: R10: 0000000000000001 R11: 0000000000000000 R12: ffff9d11478115d0
      kernel: R13: ffff9d11478115c8 R14: 0000000000003246 R15: 0000000000000001
      kernel: FS:  0000000000000000(0000) GS:ffff9d115ba00000(0000) knlGS:0000000000000000
      kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      kernel: CR2: 00007f408686f640 CR3: 0000000104d8e004 CR4: 00000000000606f0
      kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      kernel: Call Trace:
      kernel:  __set_page_dirty_buffers+0xb6/0x110
      kernel:  set_page_dirty+0x52/0xb0
      kernel:  nfs_direct_read_completion+0xc4/0x120 [nfs]
      kernel:  nfs_pgio_release+0x10/0x20 [nfs]
      kernel:  rpc_free_task+0x30/0x70 [sunrpc]
      kernel:  rpc_async_release+0x12/0x20 [sunrpc]
      kernel:  process_one_work+0x174/0x390
      kernel:  worker_thread+0x4f/0x3e0
      kernel:  kthread+0x102/0x140
      kernel:  ? drain_workqueue+0x130/0x130
      kernel:  ? kthread_stop+0x110/0x110
      kernel:  ret_from_fork+0x35/0x40
      kernel: ---[ end trace 01341980905412c9 ]---
      Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
      
      [forward-ported to v4.20]
      Signed-off-by: Calum Mackay <calum.mackay@oracle.com>
      Reviewed-by: Dave Kleikamp <dave.kleikamp@oracle.com>
      Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
  13. 09 Aug 2018, 1 commit
  14. 09 Mar 2018, 1 commit
  15. 16 Jan 2018, 1 commit
  16. 15 Aug 2017, 1 commit
  17. 21 Apr 2017, 2 commits
  18. 18 Apr 2017, 1 commit
  19. 25 Dec 2016, 1 commit
  20. 02 Dec 2016, 1 commit
  21. 23 Sep 2016, 1 commit
    • NFS: direct: use complete() instead of complete_all() · 024de8f1
      Authored by Daniel Wagner
      There is only one waiter for the completion, therefore there
      is no need to use complete_all(). Let's make that clear by
      using complete() instead of complete_all().
      
      nfs_file_direct_write() or nfs_file_direct_read() allocates a request
      object via nfs_direct_req_alloc(), which initializes the
      completion. The request object is then freed later in the exit path.
      Between the initialization and the release, either
      nfs_direct_write_schedule_iovec() or
      nfs_direct_read_schedule_iovec() is called, which asynchronously
      processes the request. The calling function waits via nfs_direct_wait()
      until the async work has been done. Thus there is only one waiter on
      the completion.

      nfs_direct_pgio_init() and nfs_direct_read_completion() are passed via
      function pointers to the NFS pageio layer. The first function does
      reference counting (get_dreq() and put_dreq()), which ensures that
      nfs_direct_read_completion() and nfs_direct_read_schedule_iovec()
      call the completion path only once.
      
      The usage pattern of the completion is:
      
      waiter context                          waker context
      
      nfs_file_direct_write()
        dreq = nfs_direct_req_alloc()
          init_completion()
        nfs_direct_write_schedule_iovec()
        nfs_direct_wait()
          wait_for_completion_killable()
      
                                              nfs_direct_write_schedule_work()
                                                nfs_direct_complete()
                                                  complete()
      
      nfs_file_direct_read()
        dreq = nfs_direct_req_alloc()
          init_completion()
        nfs_direct_read_schedule_iovec()
        nfs_direct_wait()
          wait_for_completion_killable()
                                              nfs_direct_read_schedule_iovec()
                                                nfs_direct_complete()
                                                  complete()
      
                                              nfs_direct_read_completion()
                                                nfs_direct_complete()
                                                  complete()
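
      A generic sketch of this single-waiter pattern using the plain
      completion API (illustrative names, not the fs/nfs code itself):

      #include <linux/completion.h>

      struct my_req {
              struct completion done;
      };

      static void my_req_init(struct my_req *req)
      {
              init_completion(&req->done);    /* like nfs_direct_req_alloc() */
      }

      /* Waker context: runs exactly once when the async work finishes. */
      static void my_req_finish(struct my_req *req)
      {
              complete(&req->done);           /* one waiter, so complete() suffices */
      }

      /* Waiter context: the submitting task blocks here, like nfs_direct_wait(). */
      static int my_req_wait(struct my_req *req)
      {
              return wait_for_completion_killable(&req->done);
      }
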
      Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
  22. 06 Jul 2016, 6 commits
  23. 25 Jun 2016, 1 commit