1. 04 Feb, 2015 (6 commits)
  2. 09 Oct, 2014 (2 commits)
  3. 13 Sep, 2014 (2 commits)
  4. 11 Sep, 2014 (8 commits)
  5. 04 Aug, 2014 (2 commits)
  6. 25 Jun, 2014 (2 commits)
  7. 29 May, 2014 (4 commits)
  8. 18 Apr, 2014 (1 commit)
  9. 14 Jan, 2014 (1 commit)
  10. 29 Jun, 2013 (1 commit)
  11. 07 Jun, 2013 (1 commit)
  12. 21 Mar, 2013 (1 commit)
  13. 24 Feb, 2013 (1 commit)
      pnfs: fix resend_to_mds for directio · 78f33277
      Authored by Benny Halevy
      Pass the direct-I/O request to pageio_init to clean up the API.
      
      Percolate pg_dreq from the original nfs_pageio_descriptor to
      pnfs_{read,write}_done_resend_to_mds and use it in the respective
      call to nfs_pageio_init_{read,write} on the newly created
      nfs_pageio_descriptor. (A minimal sketch of this pattern follows
      the Oops report below.)
      
      Reproduced by command:
       mount -o vers=4.1 server:/ /mnt
       dd bs=128k count=8 if=/dev/zero of=/mnt/dd.out oflag=direct
      
      BUG: unable to handle kernel NULL pointer dereference at 0000000000000028
      IP: [<ffffffffa021a3a8>] atomic_inc+0x4/0x9 [nfs]
      PGD 34786067 PUD 34794067 PMD 0
      Oops: 0002 [#1] SMP
      Modules linked in: nfs_layout_nfsv41_files nfsv4 nfs nfsd lockd nfs_acl auth_rpcgss exportfs sunrpc btrfs zlib_deflate libcrc32c ipv6 autofs4
      CPU 1
      Pid: 259, comm: kworker/1:2 Not tainted 3.8.0-rc6 #2 Bochs Bochs
      RIP: 0010:[<ffffffffa021a3a8>]  [<ffffffffa021a3a8>] atomic_inc+0x4/0x9 [nfs]
      RSP: 0018:ffff880038f8fa68  EFLAGS: 00010206
      RAX: ffffffffa021a6a9 RBX: ffff880038f8fb48 RCX: 00000000000a0000
      RDX: ffffffffa021e616 RSI: ffff8800385e9a40 RDI: 0000000000000028
      RBP: ffff880038f8fa68 R08: ffffffff81ad6720 R09: ffff8800385e9510
      R10: ffffffffa0228450 R11: ffff880038e87418 R12: ffff8800385e9a40
      R13: ffff8800385e9a70 R14: ffff880038f8fb38 R15: ffffffffa0148878
      FS:  0000000000000000(0000) GS:ffff88003e400000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: 0000000000000028 CR3: 0000000034789000 CR4: 00000000000006e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      Process kworker/1:2 (pid: 259, threadinfo ffff880038f8e000, task ffff880038302480)
      Stack:
       ffff880038f8fa78 ffffffffa021a6bf ffff880038f8fa88 ffffffffa021bb82
       ffff880038f8fae8 ffffffffa021f454 ffff880038f8fae8 ffffffff8109689d
       ffff880038f8fab8 ffffffff00000006 0000000000000000 ffff880038f8fb48
      Call Trace:
       [<ffffffffa021a6bf>] nfs_direct_pgio_init+0x16/0x18 [nfs]
       [<ffffffffa021bb82>] nfs_pgheader_init+0x6a/0x6c [nfs]
       [<ffffffffa021f454>] nfs_generic_pg_writepages+0x51/0xf8 [nfs]
       [<ffffffff8109689d>] ? mark_held_locks+0x71/0x99
       [<ffffffffa0148878>] ? rpc_release_resources_task+0x37/0x37 [sunrpc]
       [<ffffffffa021bc25>] nfs_pageio_doio+0x1a/0x43 [nfs]
       [<ffffffffa021be7c>] nfs_pageio_complete+0x16/0x2c [nfs]
       [<ffffffffa02608be>] pnfs_write_done_resend_to_mds+0x95/0xc5 [nfsv4]
       [<ffffffffa0148878>] ? rpc_release_resources_task+0x37/0x37 [sunrpc]
       [<ffffffffa028e27f>] filelayout_reset_write+0x8c/0x99 [nfs_layout_nfsv41_files]
       [<ffffffffa028e5f9>] filelayout_write_done_cb+0x4d/0xc1 [nfs_layout_nfsv41_files]
       [<ffffffffa024587a>] nfs4_write_done+0x36/0x49 [nfsv4]
       [<ffffffffa021f996>] nfs_writeback_done+0x53/0x1cc [nfs]
       [<ffffffffa021fb1d>] nfs_writeback_done_common+0xe/0x10 [nfs]
       [<ffffffffa028e03d>] filelayout_write_call_done+0x28/0x2a [nfs_layout_nfsv41_files]
       [<ffffffffa01488a1>] rpc_exit_task+0x29/0x87 [sunrpc]
       [<ffffffffa014a0c9>] __rpc_execute+0x11d/0x3cc [sunrpc]
       [<ffffffff810969dc>] ? trace_hardirqs_on_caller+0x117/0x173
       [<ffffffffa014a39f>] rpc_async_schedule+0x27/0x32 [sunrpc]
       [<ffffffffa014a378>] ? __rpc_execute+0x3cc/0x3cc [sunrpc]
       [<ffffffff8105f8c1>] process_one_work+0x226/0x422
       [<ffffffff8105f7f4>] ? process_one_work+0x159/0x422
       [<ffffffff81094757>] ? lock_acquired+0x210/0x249
       [<ffffffffa014a378>] ? __rpc_execute+0x3cc/0x3cc [sunrpc]
       [<ffffffff810600d8>] worker_thread+0x126/0x1c4
       [<ffffffff8105ffb2>] ? manage_workers+0x240/0x240
       [<ffffffff81064ef8>] kthread+0xb1/0xb9
       [<ffffffff81064e47>] ? __kthread_parkme+0x65/0x65
       [<ffffffff815206ec>] ret_from_fork+0x7c/0xb0
       [<ffffffff81064e47>] ? __kthread_parkme+0x65/0x65
      Code: 00 83 38 02 74 12 48 81 4b 50 00 00 01 00 c7 83 60 07 00 00 01 00 00 00 48 89 df e8 55 fe ff ff 5b 41 5c 5d c3 66 90 55 48 89 e5 <f0> ff 07 5d c3 55 48 89 e5 f0 ff 0f 0f 94 c0 84 c0 0f 95 c0 0f
      RIP  [<ffffffffa021a3a8>] atomic_inc+0x4/0x9 [nfs]
       RSP <ffff880038f8fa68>
      CR2: 0000000000000028
      Signed-off-by: Benny Halevy <bhalevy@tonian.com>
      Cc: stable@kernel.org [>= 3.6]
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
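      The crash pattern and the fix can be shown with a short
      self-contained sketch. The types below are hypothetical,
      simplified stand-ins for the kernel's nfs_pageio_descriptor and
      nfs_direct_req (the real prototypes differ); the point is that
      the resend path builds a brand-new descriptor, and unless pg_dreq
      is percolated across from the original one, the direct-I/O init
      hook dereferences a NULL pointer, which is the atomic_inc() Oops
      reported above.

      #include <stdio.h>

      /* Hypothetical stand-ins for the kernel structures. */
      struct nfs_direct_req { int refcount; };

      struct nfs_pageio_descriptor {
              struct nfs_direct_req *pg_dreq;  /* NULL for buffered I/O */
      };

      /* Models the direct-I/O init hook: crashes if pg_dreq is NULL. */
      static void direct_pgio_init(struct nfs_pageio_descriptor *desc)
      {
              desc->pg_dreq->refcount++;
      }

      /* Fixed resend path: copy pg_dreq from the original descriptor
       * onto the newly created one before any direct-I/O hook runs. */
      static void resend_to_mds(const struct nfs_pageio_descriptor *orig)
      {
              struct nfs_pageio_descriptor pgio = { .pg_dreq = orig->pg_dreq };

              if (pgio.pg_dreq)              /* only direct I/O has a dreq */
                      direct_pgio_init(&pgio);
              printf("resend ok, refcount=%d\n",
                     pgio.pg_dreq ? pgio.pg_dreq->refcount : 0);
      }

      int main(void)
      {
              struct nfs_direct_req dreq = { .refcount = 1 };
              struct nfs_pageio_descriptor orig = { .pg_dreq = &dreq };

              resend_to_mds(&orig);  /* direct-I/O state survives the resend */
              return 0;
      }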
  14. 15 Feb, 2013 (1 commit)
      NFSv4.1: Fix bulk recall and destroy of layouts · fd9a8d71
      Authored by Trond Myklebust
      The current code in pnfs_destroy_all_layouts() assumes that removing
      the layout from the server->layouts list is sufficient to make it
      invisible to other processes. This ignores the fact that most
      users access the layout through the nfs_inode->layout...
      There is further breakage due to the lack of reference counting on
      the layouts, meaning that the whole thing Oopses at the drop of a hat.
      
      The code in initiate_bulk_draining() is almost correct and can be
      used as a model for pnfs_destroy_all_layouts(), so move that code
      to pnfs.c and refactor it to allow a choice between a bulk recall
      of a single filesystem's layouts and a recall of all layouts.
      Also note that initiate_bulk_draining() currently calls iput()
      while holding locks; fix that too. (Both points are sketched after
      this entry.)
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
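      A self-contained sketch of the two ideas above, using hypothetical
      simplified types and a stub in place of iput() (the real kernel
      code and locking differ): one draining helper serves both the
      single-filesystem bulk recall and the drain-everything case used
      by pnfs_destroy_all_layouts(), and the put runs only after the
      lock is dropped, by first unlinking victims onto a private list
      while the lock is held.

      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>

      struct layout {
              struct layout *next;
              int fsid;                  /* which filesystem it belongs to */
      };

      static pthread_mutex_t clp_lock = PTHREAD_MUTEX_INITIALIZER;
      static struct layout *server_layouts;  /* stand-in for server->layouts */

      static void put_layout(struct layout *lo)  /* stub for iput(); may sleep */
      {
              free(lo);
      }

      /* Drain layouts matching fsid; fsid < 0 means "drain everything".
       * Victims are unlinked under the lock but put only after it is
       * released, so the sleeping put never runs with the lock held. */
      static void bulk_drain_layouts(int fsid)
      {
              struct layout *victims = NULL, **pp;

              pthread_mutex_lock(&clp_lock);
              for (pp = &server_layouts; *pp; ) {
                      struct layout *lo = *pp;

                      if (fsid < 0 || lo->fsid == fsid) {
                              *pp = lo->next;      /* unlink from shared list */
                              lo->next = victims;  /* stash on private list */
                              victims = lo;
                      } else {
                              pp = &lo->next;
                      }
              }
              pthread_mutex_unlock(&clp_lock);

              while (victims) {                    /* no locks held here */
                      struct layout *lo = victims;

                      victims = lo->next;
                      put_layout(lo);
              }
      }

      int main(void)
      {
              for (int i = 0; i < 4; i++) {
                      struct layout *lo = malloc(sizeof(*lo));

                      lo->fsid = i % 2;
                      lo->next = server_layouts;
                      server_layouts = lo;
              }
              bulk_drain_layouts(0);   /* single-filesystem bulk recall */
              bulk_drain_layouts(-1);  /* the pnfs_destroy_all_layouts() case */
              return 0;
      }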
  15. 15 Oct, 2012 (1 commit)
  16. 09 Oct, 2012 (1 commit)
  17. 29 Sep, 2012 (5 commits)